diff --git a/samples/texts/1239855/page_1.md b/samples/texts/1239855/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..3cecb50d7f8d94e5cc63e6ee30aca2b80fa30a9f --- /dev/null +++ b/samples/texts/1239855/page_1.md @@ -0,0 +1,88 @@ +Gap solitons in a two-channel microresonator structure + +Suresh Pereira and J. E. Sipe + +Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada + +John E. Heebner and Robert W. Boyd + +Institute of Optics, University of Rochester, Rochester, New York 14627 + +Received October 5, 2001 + +We show that, when two channel waveguides are coupled by a sequence of periodically spaced microresonators, the group-velocity dispersion is low in the vicinity of the gap associated with the resonant frequency of the resonators. This low dispersion permits the excitation of a gap soliton with much lower energy than in a gap of similar width caused by Bragg reflection. © 2002 Optical Society of America + +OCIS codes: 190.5530, 190.4390. + +We consider nonlinear optical propagation in two +channel waveguides coupled by periodically spaced +microresonators [Fig. 1(a)]; we call such a device a +two-channel, side-coupled, integrated, spaced sequence +of resonators (SCISSOR).¹ The nonlinear properties +of a similar structure with only one channel guide +were studied previously,¹,² as were the linear proper- +ties of the two-channel structure.³,⁴ In a bottom (top) +mode, light traveling in the forward direction in the +bottom (top) channel is coupled via the resonator to +light traveling in the backward direction in the top +(bottom) channel. Two types of gap open in the +dispersion relation: Bragg gaps associated with the +resonator spacing, *d*, and resonator gaps associated +with *ρ*, the radius of the resonators. We show that, +for a Kerr nonlinear SCISSOR structure, the propaga- +tion of optical pulses is well described by a nonlinear +Schrödinger equation (NLSE). 
The NLSE supports soliton solutions, and we find that much less energy is required for exciting a gap soliton in a resonator gap than in a Bragg gap with the same gap width.

We begin in the linear regime and denote the electric field in the bottom channel **L**(**r**) = **S**(x,y)l(z)ŷ, in the top channel **U**(**r**) = **S**(x,y)u(z)ŷ, and in the microresonator **Q**(y,R,θ) = **T**(y,R)q(θ)ŷ, where **S**(x,y) [**T**(y,R)] is the mode profile associated with the channel waveguides (resonator waveguides), R is the radial variable, and θ is the angle within the resonator, measured counterclockwise from the bottom coupling points [see Fig. 1(b)]. We consider only the largest Cartesian component of the electric field in the channel and resonator, and to make the nonlinear term in the equation more tractable we take it to be the y component.

Away from the coupling points, the effect of propagation is the accumulation of phase by means of the propagation constant $\nu = n_{\text{eff}}\omega/c$, where $\omega$ is the frequency of the light and $n_{\text{eff}}$ is the effective index of refraction associated with the waveguide; we ignore any small frequency dependence of $n_{\text{eff}}$. Without loss of generality, we assume that light is traveling forward (backward) in the bottom (top) channel. At the coupling points we use the model¹

$$
\begin{bmatrix} q(0_+) \\ l(a_+) \end{bmatrix} = \begin{bmatrix} \sigma_b & i\kappa_b \\ i\kappa_b & \sigma_b \end{bmatrix} \begin{bmatrix} q(0_-) \\ l(a_-) \end{bmatrix}, \quad (1)
$$

where $a_{\pm} = a \pm \delta a$ and $0_{\pm} = \pm \delta\theta$, where $\delta a$ and $\delta\theta$ are infinitesimal quantities. A similar expression is used for the top channel coupling point. To conserve energy, the coupling coefficients, $\sigma_b$, $\sigma_t$, $\kappa_b$, and $\kappa_t$, satisfy $|\sigma_i|^2 + |\kappa_i|^2 = 1$ and $\sigma_i^* \kappa_i = \sigma_i \kappa_i^*$, where $i = b, t$. Combining the effects of phase accumulation with those of coupling [Eq. (1)], we determine an expression that relates $l(d)$ and $u(d)$ to $l(0)$ and $u(0)$.

Searching for the Bloch solution, we write $l(d) = \exp(ikd)l(0)$ and $u(d) = \exp(ikd)u(0)$, where $k$ is the Bloch wave number. Tracing the fields through the system, we find that

$$
\begin{bmatrix} \exp(i\nu d)(\beta_b\beta_t - \alpha^2) - \beta_t \exp(ikd) & \alpha \\ -\alpha & -\beta_t \exp(ikd) \end{bmatrix} \begin{bmatrix} l(0) \\ u(0) \end{bmatrix} = 0, \quad (2)
$$

where $\alpha = i\gamma\kappa_b\kappa_t \exp(i\pi\nu\rho)$, $\beta_i = \sigma_i + i\gamma\sigma_i\kappa_i \exp(2i\pi\nu\rho)$, and $\gamma = i[1 - \sigma_b\sigma_t \exp(2i\pi\nu\rho)]^{-1}$. Equation (2) has nontrivial solutions only when the determinant of the matrix vanishes, from which we find an expression for the wave number, $k(\omega)$, that we can invert to determine the dispersion relation, $\omega(k)$.

We define the Bragg frequency, $\omega_b/c = \pi/(n_{\text{eff}}d)$, and the resonator frequency, $\omega_r/c = 1/(n_{\text{eff}}\rho)$. In Fig. 2 we plot the dispersion relation in the reduced band picture for a symmetric, two-channel SCISSOR structure with $n_{\text{eff}} = 3.47$, $\sigma_b = \sigma_t = 0.98$, $2\pi\rho = 26$ µm, and $d = 16$ µm.
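As a quick numerical cross-check (our sketch, not part of the original letter), the positions of the gaps plotted in Fig. 2 follow from the definitions of $\omega_b$ and $\omega_r$ above, evaluated at the 72nd Bragg order and the 59th resonator order with the parameters just quoted:

```python
import math

# Fig. 2 parameters from the text
n_eff = 3.47
d = 16.0                    # resonator spacing (um)
rho = 26.0 / (2 * math.pi)  # ring radius (um), since 2*pi*rho = 26 um

omega_b = math.pi / (n_eff * d)  # fundamental Bragg frequency omega_b / c (um^-1)
omega_r = 1.0 / (n_eff * rho)    # fundamental resonator frequency omega_r / c (um^-1)

print(72 * omega_b)  # ~4.07 um^-1: the 72nd-order Bragg gap
print(59 * omega_r)  # ~4.11 um^-1: the 59th-order resonator gap
```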
There are two types of gap: the 72nd-order Bragg gap at $\omega/c \approx 4.075$ µm$^{-1}$ and the 59th-order resonator gap at $\omega/c \approx 4.11$ µm$^{-1}$. The upper and lower edges of the Bragg gap occur at $k=0$, whereas for the resonator gap they occur at $k=0$ and $k=\pi/d$. In the vicinity of a Bragg gap the \ No newline at end of file diff --git a/samples/texts/1239855/page_2.md b/samples/texts/1239855/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..4a06f3c22ba7ab5a459972dc8342cc8a4728e363 --- /dev/null +++ b/samples/texts/1239855/page_2.md @@ -0,0 +1,91 @@ Fig. 1. (a) Schematic of the two-channel SCISSOR. (b) One unit cell of the structure. Filled circles, coupling points at the top and the bottom of the microresonator.

curvature of the dispersion relation is high, whereas near a resonator gap the bands are almost completely flat.

We now derive the NLSE that is relevant to the two-channel SCISSOR structure. We require the Bloch functions, $\mathbf{E}_{mk}(\mathbf{r})$, of the electric field,⁵ where $m$ is the index of the band. We can determine these functions by using the eigenvectors of Eq. (2) to find the electric field everywhere within one unit cell and then normalizing the field according to

$$
\int \frac{\mathrm{d}\mathbf{r}}{\Omega_{\mathrm{cell}}} n^2(\mathbf{r}) \mathbf{E}_{mk}^*(\mathbf{r}) \cdot \mathbf{E}_{m'k'}(\mathbf{r}) = \delta_{mm'} \delta_{kk'}, \quad (3)
$$

where $\Omega_{\text{cell}}$ is a normalization volume associated with one unit cell of the periodic medium. We assume that light is propagating in either a bottom or a top mode but not in both. We label the carrier wave number of the light $\bar{k}$, which corresponds to a frequency $\bar{\omega} = \omega(\bar{k})$, and introduce a field, $g_{m\bar{k}}(z, t)$, that is related to the energy in the electromagnetic field to lowest order through

$$
\epsilon = \int |g_{m\bar{k}}(z,t)|^2 \,\mathrm{d}z. \quad (4)
$$

The NLSE is

$$
\left(i \frac{\partial}{\partial t} + i \omega'_{m\bar{k}} \frac{\partial}{\partial z}\right) g_{m\bar{k}}(z, t) = -\frac{1}{2} \omega''_{m\bar{k}} \frac{\partial^2 g_{m\bar{k}}(z, t)}{\partial z^2} - \Gamma_{m\bar{k}} |g_{m\bar{k}}(z, t)|^2 g_{m\bar{k}}(z, t), \quad (5)
$$

where $\omega'_{m\bar{k}} = \partial\omega/\partial k|_{\bar{k}}$ is the group velocity at the carrier wave number and $\omega''_{m\bar{k}} = \partial^2\omega/\partial k^2|_{\bar{k}}$ is the group-velocity dispersion. The nonlinear coefficient is given by

$$
\Gamma_{m\bar{k}} = \frac{3\bar{\omega}}{4A_{\text{eff}}\epsilon_0} \int_{\text{cell}} \frac{\mathrm{d}\mathbf{r}}{\Omega_{\text{cell}}} \chi^{(3)}(\mathbf{r}) |\mathbf{E}_{m\bar{k}}(\mathbf{r})|^4, \quad (6)
$$

where $\chi^{(3)}(\mathbf{r})$ is the nonlinear susceptibility of the medium, which is assumed to have the same periodicity as the device, and $A_{\text{eff}}$ is an effective area associated with the cross section of the channel waveguides. The region of validity of Eq. (5) has been extensively discussed.⁵,⁶

When the carrier frequency of the field is at an upper band edge, the NLSE [Eq. (5)] supports gap soliton solutions.⁷ We set $m = u$ to represent the upper band. The solitons have the form⁷

$$
g_{u\bar{k}}(z,t) = A \exp(iB_2 z) \exp[-i(\delta + \Delta)t] \operatorname{sech}(B_1 z - Ct), \quad (7)
$$

with $A = (-2\delta/\Gamma_{u\bar{k}})^{1/2}$, $B_1 = (-2\delta/\omega''_{u\bar{k}})^{1/2}$, $B_2 = (+2\Delta/\omega''_{u\bar{k}})^{1/2}$, and $C = \omega''_{u\bar{k}} B_1 B_2$, where $\omega''_{u\bar{k}}$ is the group-velocity dispersion at the upper band edge and where the signs of the detunings $\delta$ and $\Delta$ are chosen such that these coefficients come out to be real; $\delta$ determines the height and the spatial width of the soliton, whereas $\Delta$ determines the velocity.
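To make the parameter relations concrete, here is a small sketch evaluating the coefficients of Eq. (7); the numerical values of $\omega''_{u\bar{k}}$, $\Gamma_{u\bar{k}}$, $\delta$, and $\Delta$ are assumed for illustration and are not taken from the letter:

```python
import math

# Assumed illustrative values (dimensionless); signs are chosen so that all
# four coefficients of Eq. (7) come out real for omega2 > 0 and Gamma > 0.
omega2 = 1.0e-2   # omega''_{u kbar}: group-velocity dispersion at the band edge
Gamma = 1.0       # Gamma_{u kbar}: nonlinear coefficient
delta = -1.0e-3   # detuning setting the height and spatial width (delta < 0)
Delta = 1.25e-4   # detuning setting the velocity (Delta > 0)

A = math.sqrt(-2 * delta / Gamma)    # amplitude
B1 = math.sqrt(-2 * delta / omega2)  # inverse spatial width
B2 = math.sqrt(2 * Delta / omega2)   # carrier wave-number shift
C = omega2 * B1 * B2                 # temporal rate in sech(B1*z - C*t)

print(A, B1, B2, C)
```

The soliton's velocity is $C/B_1 = \omega''_{u\bar{k}} B_2$, which vanishes as $\Delta \to 0$, consistent with $\Delta$ controlling the velocity.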
The center frequency of the soliton is $\omega_c = \omega_{u\bar{k}} + \delta + \Delta$, and the frequency width of the pulse is denoted $C$. For most of the frequencies of the pulse to be contained within the gap we require that $\omega_c + 2C \le \omega_{u\bar{k}}$. It is easy to confirm that this condition can be met for an arbitrary value of $C$ if we set $\Delta = C/(2M)$ and $\delta = -C(M/2)$, where $M \ge 4$. However, the pulse width is limited by the fact that NLSE (5) is valid only for frequencies slightly inside the gap. If we fix $M=4$ then we should have $C \le (\bar{\delta}\omega)/20$, where $\bar{\delta}\omega$ is the width of the gap. We have verified that a pulse of this width and central frequency is well described by the NLSE.

Using the form of $g_{u\bar{k}}$ [Eq. (7)] in the expression for the energy [Eq. (4)], we find that $\epsilon_{u\bar{k}}^{\text{soliton}} = (2\sqrt{2|\delta|}/\Gamma_{u\bar{k}})\sqrt{\omega''_{u\bar{k}}}$. Because the group-velocity dispersion near a resonator gap is so much smaller than near a Bragg gap, the energy required for exciting a gap soliton with the same pulse width ($C$) and the same depth within the gap ($\delta + \Delta$) is much lower in a resonator gap; furthermore, the resonator soliton will travel with a slower group velocity and will

Fig. 2. Dispersion relation for the two-channel SCISSOR with material parameters given in the text. The gap at $\omega/c = 4.11~\mu\text{m}^{-1}$ is associated with the 59th-order resonance of the microresonator. The gap at $\omega/c = 4.075~\mu\text{m}^{-1}$ is associated with Bragg reflection. These gaps are at typical communications wavelengths, as indicated by the right-hand axis.
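The scaling of the soliton energy with group-velocity dispersion can be sketched numerically; the dispersion values below are hypothetical placeholders chosen only to illustrate the $\sqrt{\omega''}$ dependence, not values computed for this structure:

```python
import math

def soliton_energy(delta, Gamma, omega2):
    # epsilon = (2*sqrt(2|delta|)/Gamma) * sqrt(omega''), from the text
    return (2.0 * math.sqrt(2.0 * abs(delta)) / Gamma) * math.sqrt(omega2)

delta, Gamma = -1.0e-3, 1.0  # same gap depth and nonlinearity in both gaps
omega2_bragg = 1.0           # assumed dispersion near a Bragg gap (arbitrary units)
omega2_res = 1.0e-4          # assumed, much flatter, dispersion near a resonator gap

ratio = soliton_energy(delta, Gamma, omega2_res) / soliton_energy(delta, Gamma, omega2_bragg)
print(ratio)  # sqrt(omega2_res / omega2_bragg) ~ 0.01
```

For equal $\delta$ and $\Gamma$, the prefactors cancel and the energy ratio reduces to $\sqrt{\omega''_{\text{res}}/\omega''_{\text{Bragg}}}$.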
\ No newline at end of file diff --git a/samples/texts/1239855/page_3.md b/samples/texts/1239855/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..2e998d4d2caf5a2660df522e4a758074f4e8a2d8 --- /dev/null +++ b/samples/texts/1239855/page_3.md @@ -0,0 +1,29 @@ Fig. 3. $S_{\text{sol}}$ is the ratio of the energy required for forming a gap soliton in a resonator gap to the energy required for forming the same gap soliton in a Bragg gap with the same gap width relative to its center frequency.

consequently have a smaller spatial width. We define a quantity $S_{\text{sol}}(\bar{\delta}\omega) = \epsilon_{m\bar{k}}^{\text{soliton,res}}/\epsilon_{m\bar{k}}^{\text{soliton,Bragg}}$, where $\epsilon_{m\bar{k}}^{\text{soliton,res(Bragg)}}$ is the energy required for exciting a gap soliton in a resonator (Bragg) gap; $S_{\text{sol}}$ is a measure of how much easier it is to form a gap soliton in a resonator gap than in a Bragg gap. To make this comparison we consider one system in which $\omega_{u\bar{k}}$ corresponds to the upper band edge of a resonator gap and another in which the same frequency $\omega_{u\bar{k}}$ corresponds to the upper band edge of a Bragg gap. The overlap integrals that we use to determine the nonlinear coefficient, $\Gamma_{m\bar{k}}$, are roughly equal at the gaps, so $S_{\text{sol}} \approx (\omega''_{\text{res}}/\omega''_{\text{Bragg}})^{1/2}$. We use the physical parameters defined above but vary the values of $\sigma$ and $d$ to achieve different gap widths and center frequencies.

In Fig. 3 we plot the value of $S_{\text{sol}}$ as a function of the gap width ($\bar{\delta}\omega/\omega_{u\bar{k}}$). For a small gap width, $(\bar{\delta}\omega/\omega_{u\bar{k}}) = 10^{-6}$, $S_{\text{sol}} = 10^{-4}$; for gap width $(\bar{\delta}\omega/\omega_{u\bar{k}}) = 10^{-4}$, which is more realistic, $S_{\text{sol}} = 10^{-2}$. Of course, material and mode dispersion, both neglected in our calculations, will set a lower bound on $S_{\text{sol}}$.
The low energy requirements for gap solitons in a resonator gap are balanced by a much longer soliton

formation length,⁸ but for switching applications this restriction is not so important. A pulse with a form similar to Eq. (7) but with a much lower amplitude will be unable to propagate, because all its frequencies lie within the gap. By contrast, if the pulse has the correct amplitude, it will form into a soliton while it propagates. If the initial pulse is close to a soliton, then reshaping should be minimal.

In conclusion, we have investigated optical propagation in a two-channel SCISSOR structure with a weak Kerr nonlinearity. We have presented a NLSE that accurately describes propagation near the band edges of a resonator gap if the light is propagating in only one mode of the system. The energy required for forming a gap soliton in a resonator gap is much smaller than in a Bragg gap of similar width. We note, too, that whereas the one-channel SCISSOR structure investigated by Heebner et al.¹ supports solitons that can travel with a small group velocity, that velocity can never vanish; furthermore, that structure possesses no gap, so a true gap soliton could not be launched. We intend to extend the analysis to coupled gap solitons and to discuss the issues involved in experimentally launching and observing gap solitons.

This research was supported by the Natural Sciences and Engineering Research Council of Canada and by Photonics Research Ontario. S. N. Pereira's e-mail address is pereira@physics.utoronto.ca.

## References

1. J. E. Heebner, R. W. Boyd, and Q.-H. Park, "Slow light, induced dispersion, enhanced nonlinearity and optical solitons in a resonator-array waveguide," submitted to Phys. Rev. Lett.

2. J. E. Heebner, R. W. Boyd, and Q.-H. Park, "SCISSOR solitons and other novel propagation effects in microresonator-modified waveguides," J. Opt. Soc. Am. B (to be published).

3. B. E. Little, S. T. Chu, J. V. Hryniewicz, and P. P. Absil, Opt. Lett.
**25**, 344 (2000).

4. A. Melloni, Opt. Lett. **26**, 917 (2001).

5. N. A. R. Bhat and J. E. Sipe, Phys. Rev. E **64**, 056604 (2001).

6. S. Pereira and J. E. Sipe, Phys. Rev. E **62**, 5745 (2000).

7. C. M. de Sterke and J. E. Sipe, Opt. Lett. **14**, 871 (1989).

8. G. P. Agrawal, *Nonlinear Fiber Optics* (Academic, San Diego, Calif., 1989). \ No newline at end of file diff --git a/samples/texts/142615/page_1.md b/samples/texts/142615/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..a24b630ea6ca3fbe3f461904e63eab0261f6363b --- /dev/null +++ b/samples/texts/142615/page_1.md @@ -0,0 +1,27 @@ # One-to-One Disjoint Path Covers in DCell

Xi Wang, Jianxi Fan, Baolei Cheng, Wenjun Liu, Yan Wang

► To cite this version:

Xi Wang, Jianxi Fan, Baolei Cheng, Wenjun Liu, Yan Wang. One-to-One Disjoint Path Covers in DCell. 10th International Conference on Network and Parallel Computing (NPC), Sep 2013, Guiyang, China. pp.61-70, 10.1007/978-3-642-40820-5_6 . hal-01513760

HAL Id: hal-01513760

https://hal.inria.fr/hal-01513760

Submitted on 25 Apr 2017

**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

\ No newline at end of file diff --git a/samples/texts/142615/page_10.md b/samples/texts/142615/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..fbffb7f95b154372f2997c69420f4fd5441e3902 --- /dev/null +++ b/samples/texts/142615/page_10.md @@ -0,0 +1,13 @@ Case 2.
$\alpha \neq \beta$ and $(x, y) \in E(DCell_{\tau+1})$. Let $P_1 = \langle x, y \rangle$. Select $x_0 \in V(DCell_{\tau}^{\alpha})$ (resp. $y_0 \in V(DCell_{\tau}^{\beta})$) such that $(x, x_0) \in E(DCell_{\tau}^{\alpha})$ (resp. $(y, y_0) \in E(DCell_{\tau}^{\beta})$). According to the induction hypothesis, there exist $(\tau + 1)$ vertex disjoint paths $\{P'_i|2 \le i \le \tau + 2\}$ (resp. $\{Q'_j|2 \le j \le \tau + 2\}$) between the two distinct vertices $x$ and $x_0$ (resp. $y_0$ and $y$) in $DCell_{\tau}^{\alpha}$ (resp. $DCell_{\tau}^{\beta}$). Let $P''_2 = \langle x, x_0 \rangle$ (resp. $Q''_2 = \langle y_0, y \rangle$), $P'_i = \langle x, \dots, x_i, x_0 \rangle$ (resp. $Q'_j = \langle y_0, y_j, \dots, y \rangle$), and $P''_i = P'_i - (x_i, x_0)$ (resp. $Q''_j = Q'_j - (y_0, y_j)$) with $3 \le i \le \tau+2$ (resp. $3 \le j \le \tau+2$). Furthermore, let $z_i \in V(DCell_{\tau}^{\gamma_i})$ (resp. $w_j \in V(DCell_{\tau}^{\delta_j})$) with $2 \le i \le \tau+2$ (resp. $2 \le j \le \tau+2$) and $(x_i, z_i) \in E(DCell_{\tau+1})$ (resp. $(y_j, w_j) \in E(DCell_{\tau+1})$). Let $W_0 = \bigcup_{\theta=2}^{\tau+2} DCell_{\tau}^{\gamma_\theta}$, $W_1 = \bigcup_{\theta=2}^{\tau+2} DCell_{\tau}^{\delta_{\theta}}$, and $W = W_0 \cup W_1 \cup DCell_{\tau}^{\alpha} \cup DCell_{\tau}^{\beta}$. For $2 \le i \le \tau + 2$, we consider the following two subcases with respect to $DCell_{\tau}^{\gamma_i}$.

Case 2.1. $DCell_{\tau}^{\gamma_i} \subseteq W_1$. Select $w_j \in V(DCell_{\tau}^{\gamma_i})$ such that $2 \le j \le \tau + 2$. According to Theorem 1, there exists a path $S$ from $z_i$ to $w_j$ in $DCell_{\tau}^{\gamma_i}$. Furthermore, let $W = W \cup DCell_{\tau}^{\gamma_i}$ and $P_i = P_i'' + (x_i, z_i) + S + (w_j, y_j) + Q_j''$.

Case 2.2. $DCell_{\tau}^{\gamma_i} \not\subseteq W_1$. Select $DCell_{\tau}^{\delta_j} \not\subseteq W$ such that $2 \le j \le \tau + 2$.
Then, choose $DCell_{\tau}^{p}$ and $DCell_{\tau}^{q}$ such that the three graphs $DCell_{\tau}^{p}$, $DCell_{\tau}^{q}$, and $W$ are internally vertex-independent, with $p, q \in \{0, 1, \dots, t_k\}$. Let $W'_i = DCell_{\tau}^{\gamma_i} \cup DCell_{\tau}^{\delta_j} \cup DCell_{\tau}^{p} \cup DCell_{\tau}^{q}$. According to Lemma 4, there exists a path $S$ from $z_i$ to $w_j$ in $DCell_{\tau+1}[V(W'_i)]$. Furthermore, let $W = W \cup W'_i$ and $P_i = P_i'' + (x_i, z_i) + S + (w_j, y_j) + Q_j''$.

Furthermore, select $P_i$ such that $z_i \notin V(W_1)$ and $w_j \in V(W'_i)$, where $2 \le i \le \tau + 2$ and $2 \le j \le \tau + 2$. According to Lemma 4, there exists a path $S$ from $z_i$ to $w_j$ in $DCell_{\tau+1}[V(W'_i) \cup (V(DCell_{\tau+1}) - V(W))]$. Furthermore, let $P_i = P_i'' + (x_i, z_i) + S + (w_j, y_j) + Q_j''$.

According to the above discussion, there exist $(\tau + 2)$ vertex disjoint paths $\{P_i|1 \le i \le \tau + 2\}$ between the two distinct vertices $x$ and $y$ of $DCell_{\tau+1}$.

Case 3. $\alpha \neq \beta$ and $(x, y) \notin E(DCell_{\tau+1})$. Select $u \in V(DCell_{\tau+1})$ (resp. $v \in V(DCell_{\tau+1})$) such that $(x, u) \in E(DCell_{\tau+1})$ (resp. $(y, v) \in E(DCell_{\tau+1})$), $u \in DCell_{\tau}^{\phi}$ (resp. $v \in DCell_{\tau}^{\psi}$), and $\phi, \psi \in \{0, 1, \dots, t_k\}$, where $DCell_{\tau}^{\alpha}$ and $DCell_{\tau}^{\beta}$ (resp. $DCell_{\tau}^{\phi}$ and $DCell_{\tau}^{\psi}$) are internally vertex-independent. We consider the following three subcases with respect to $u$ and $v$.

Case 3.1. $u \in V(DCell_{\tau}^{\beta})$. Select $x_0 \in V(DCell_{\tau}^{\alpha})$ such that $(x, x_0) \in E(DCell_{\tau}^{\alpha})$. Let $y_0 = u$. According to the induction hypothesis, there exist $(\tau + 1)$ vertex disjoint paths $\{P'_i|2 \le i \le \tau + 2\}$ (resp. $\{Q'_j|1 \le j \le \tau + 1\}$) between the two distinct vertices $x$ and $x_0$ (resp. $y_0$ and $y$) in $DCell_{\tau}^{\alpha}$ (resp. $DCell_{\tau}^{\beta}$).
Let $P_1 = (x, y_0) + Q'_1$ and $Q''_{\tau+2} = \emptyset$. Then, let $P''_2 = \langle x, x_0 \rangle$, $P'_i = \langle x, \dots, x_i, x_0 \rangle$ (resp. $Q'_j = \langle y_0, y_j, \dots, y \rangle$), and $P''_i = P'_i - (x_i, x_0)$ (resp. $Q''_j = Q'_j - (y_0, y_j)$) with $3 \le i \le \tau + 2$ (resp. $2 \le j \le \tau + 1$). Furthermore, let $z_i \in V(DCell_{\tau}^{\gamma_i})$ (resp. $w_j \in V(DCell_{\tau}^{\delta_j})$), where $2 \le i \le \tau + 2$ (resp. $1 \le j \le \tau + 1$), $w_{\tau+2} = v \in V(DCell_{\tau}^{\delta_{\tau+2}})$, and $(x_i, z_i) \in E(DCell_{\tau+1})$ (resp. $(y_j, w_j) \in E(DCell_{\tau+1})$). The required paths $\{P_i|2 \le i \le \tau + 2\}$ can be derived by an approach similar to that of Case 2, so we omit the details.
Let $P_2'' = \langle x, x_0 \rangle$ (resp. $Q_2'' = \langle y_0, y \rangle$), $P_i' = \langle x, \dots, x_i, x_0 \rangle$ (resp. $Q_j' = \langle y_0, y_j, \dots, y \rangle$), and $P_i'' = P_i' - (x_i, x_0)$ (resp. $Q_j'' = Q_j' - (y_0, y_j)$) with $3 \le i \le \tau+2$ (resp. $3 \le j \le \tau+2$). Furthermore, let $z_i \in V(DCell_{\tau}^{\gamma_i})$ (resp. $w_j \in V(DCell_{\tau}^{\delta_j})$), where $2 \le i \le \tau+2$ (resp. $2 \le j \le \tau+2$) and $(x_i, z_i) \in E(DCell_{\tau+1})$ (resp. $(y_j, w_j) \in E(DCell_{\tau+1})$). The required paths $\{P_i|1 \le i \le \tau+2\}$ can be derived by an approach similar to that of Case 2, so we omit the details.

According to the discussions in Case 3 and Case 3.3, there exist $(\tau + 2)$ vertex disjoint paths $\{P_i|1 \le i \le \tau + 2\}$ between the two distinct vertices $x$ and $y$ of $DCell_{\tau+1}$.

In summary, for any two distinct vertices $x, y \in V(DCell_{\tau+1})$, there exist $(\tau+2)$ vertex disjoint paths $\{P_i|1 \le i \le \tau+2\}$ between $x$ and $y$ in $DCell_{\tau+1}$. $\square$

**Lemma 6.** For any $t_0 \ge 3$ and $k \ge 0$, $DCell_k$ is $(k+t_0-1)$-DPC-able.

*Proof.* We will prove this lemma by induction on the dimension $k$ of DCell. For any $t_0 \ge 3$, by Lemma 1, the lemma holds for $k=0$. For any $t_0 \ge 3$, supposing that the lemma holds for $k=\tau$, where $\tau \ge 0$, the proof that it holds for $k=\tau+1$ is similar to that of Lemma 5 and is thus omitted. $\square$

**Theorem 3.** $DCell_k$ is $(k+t_0-1)$-DPC-able with $k \ge 0$.

*Proof.* By Lemma 1, the theorem holds for $k=0$ and $t_0 \ge 2$. By Lemma 2, the theorem holds for $k=1$ and $t_0=2$. By Lemma 5, the theorem holds for $k \ge 2$ and $t_0=2$. By Lemma 6, the theorem holds for $t_0 \ge 3$ and $k \ge 0$.
$\square$

# 4 Conclusions

DCell has been proposed as one of the most important data center networks; it can support millions of servers with outstanding network capacity and provides good fault tolerance while using only commodity switches. In this paper, we have proved that there exist $r$ vertex disjoint paths $\{P_i|1 \le i \le r\}$ between any two distinct vertices $u$ and $v$ of $DCell_k$ ($k \ge 0$), where $n$ is the vertex number of $DCell_0$ and $r=n+k-1$. The result is optimal because any vertex in $DCell_k$ has exactly $r = n+k-1$ neighbors. According to our result, the methods in [8, 9] can be used to decrease deadlock and congestion in multi-cast routing in DCell. \ No newline at end of file diff --git a/samples/texts/142615/page_2.md b/samples/texts/142615/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..e24442d2386a94714cda33df0ca6c2414f0187b2 --- /dev/null +++ b/samples/texts/142615/page_2.md @@ -0,0 +1,41 @@ Acknowledgments

This work is supported by the National Natural Science Foundation of China (61170021), the Specialized Research Fund for the Doctoral Program of Higher Education (20103201110018), the Application Foundation Research of Suzhou of China (SYG201240), and the Graduate Training Excellence Program Project of Soochow University (58320235).

References

1. S.Ghemawat, H.Gobioff, S.Leung: The Google file system. ACM SIGOPS Operating Systems Review, 37(5), 29-43 (2003)

2. J.Dean, S.Ghemawat: MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1), 107-113 (2008)

3. M.Isard, M.Budiu, Y.Yu, A.Birrell, D.Fetterly: Dryad: distributed data-parallel programs from sequential building blocks. ACM SIGOPS Operating Systems Review, 41(3), 59-72 (2007)

4. C.Guo, H.Wu, K.Tan, L.Shi, Y.Zhang, S.Lu: DCell: a scalable and fault-tolerant network structure for data centers. ACM SIGCOMM Computer Communication Review, 38(4), 75-86 (2008)

5.
C.Guo, G.Lu, D.Li, H.Wu, X.Zhang, Y.Shi, C.Tian, Y.Zhang, S.Lu: BCube: a high performance, server-centric network architecture for modular data centers. ACM SIGCOMM Computer Communication Review, 39(4), 63-74 (2009)

6. D.Li, C.Guo, H.Wu, K.Tan, Y.Zhang, S.Lu: FiConn: Using backup port for server interconnection in data centers. IEEE INFOCOM, 2276-2285 (2009)

7. D.Li, C.Guo, H.Wu, K.Tan, Y.Zhang, S.Lu, J.Wu: Scalable and cost-effective interconnection of data-center servers using dual server ports. IEEE/ACM Transactions on Networking, 19(1), 102-114 (2011)

8. X.Lin, P.Philip, L.Ni: Deadlock-free multicast wormhole routing in 2-D mesh multicomputers. IEEE Transactions on Parallel and Distributed Systems, 5(8), 793-804 (1994)

9. N.Wang, C.Yen, C.Chu: Multicast communication in wormhole-routed symmetric networks with hamiltonian cycle model. Journal of Systems Architecture, 51(3), 165-183 (2005)

10. C.Lin, H.Huang, L.Hsu: On the spanning connectivity of graphs. Discrete Mathematics, 307(2), 285-289 (2007)

11. C.Lin, H.Huang, J.Tan, L.Hsu: On spanning connected graphs. Discrete Mathematics, 308(7), 1330-1333 (2008)

12. D.B.West: Introduction to Graph Theory, 2nd edn. Prentice Hall, Englewood Cliffs (2001)

13. J.Park, C.Kim, S.Lim: Many-to-many disjoint path covers in hypercube-like interconnection networks with faulty elements. IEEE Transactions on Parallel and Distributed Systems, 17(3), 227-240 (2006)

14. R.Caha, V.Koubek: Spanning multi-paths in hypercubes. Discrete Mathematics, 307(16), 2053-2066 (2007)

15. X.Chen: Many-to-many disjoint paths in faulty hypercubes. Information Sciences, 179(18), 3110-3115 (2009)

16. X.Chen: Paired many-to-many disjoint path covers of hypercubes with faulty edges.
Information Processing Letters, 112(3), 61-66 (2012) \ No newline at end of file diff --git a/samples/texts/142615/page_3.md b/samples/texts/142615/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..9648b0b3529f3e1cd064f6ed12bca07c5f8c6d25 --- /dev/null +++ b/samples/texts/142615/page_3.md @@ -0,0 +1,13 @@ 17. J.Park, H.Kim, H.Lim: Many-to-many disjoint path covers in the presence of faulty elements. IEEE Transactions on Computers, 58(4), 528-540 (2009)

18. M.Ma: The spanning connectivity of folded hypercubes. Information Sciences, 180(17), 3373-3379 (2010)

19. Y.Shih, S.Kao: One-to-one disjoint path covers on k-ary n-cubes. Theoretical Computer Science, 412(35), 4513-4530 (2011)

20. H.Hsu, C.Lin, H.Hung, L.Hsu: The spanning connectivity of the (n,k)-star graphs. International Journal of Foundations of Computer Science, 17(2), 415-434 (2006)

21. X.Chen: Unpaired many-to-many vertex-disjoint path covers of a class of bipartite graphs. Information Processing Letters, 110(6), 203-205 (2010)

22. P.Huang, L.Hsu: The spanning connectivity of line graphs. Applied Mathematics Letters, 24(9), 1614-1617 (2011)

23. M.Kliegl, J.Lee, J.Li, X.Zhang, C.Guo, D.Rincon: Generalized DCell structure for load-balanced data center networks.
IEEE INFOCOM Conference on Computer Communications Workshops, 1-5 (2010) \ No newline at end of file diff --git a/samples/texts/142615/page_4.md b/samples/texts/142615/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..927cf0673690efdd2c16fb22d37670a37f0cc50c --- /dev/null +++ b/samples/texts/142615/page_4.md @@ -0,0 +1,21 @@ # One-to-One Disjoint Path Covers in DCell

Xi Wang, Jianxi Fan*, Baolei Cheng, Wenjun Liu, and Yan Wang

School of Computer Science and Technology, Soochow University, Suzhou 215006, China
{20124027002, jxfan, chengbaolei, 20114027003, wangyanme}@suda.edu.cn

**Abstract.** DCell, a server-centric data center network structure, has been proposed as one of the most important data center networks. DCell can support millions of servers with outstanding network capacity and provides good fault tolerance while using only commodity switches. In this paper, we prove that there exist $r$ vertex disjoint paths $\{P_i|1 \le i \le r\}$ between any two distinct vertices $u$ and $v$ of $DCell_k$ ($k \ge 0$), where $r = n+k-1$ and $n$ is the vertex number of $DCell_0$. The result is optimal because any vertex in $DCell_k$ has exactly $r = n + k - 1$ neighbors.

**Keywords:** DCell, Data Center Network, Disjoint Path Covers, Hamiltonian

## 1 Introduction

Data centers have become more and more important with the development of cloud computing. In recent years, data centers have been critical to the business of companies such as Amazon, Google, Facebook, and Microsoft, which already own enormous data centers with hundreds of thousands of servers. These data centers support both on-line applications, such as web search, on-line gaming, email, and cloud storage, and infrastructure services such as GFS [1], MapReduce [2], and Dryad [3].

Research has shown that traditional tree-based data center networks [4] suffer from bandwidth bottlenecks, vulnerability to single switch failures, and other issues.
To overcome the defects of tree-based data center networks, many new data center networks have been proposed, such as DCell [4], BCube [5], and FiConn [6, 7]. DCell has many good properties, including exponential scalability, high network capacity, small diameter, and high fault tolerance. In comparison with DCell, BCube is meant for container-based data center networks and only supports thousands of servers, while FiConn is not a regular network, which may raise its construction complexity.

DCell uses servers as routing and forwarding infrastructure, and the multi-cast routing frequency between servers is quite high in data center networks. Multi-cast routing algorithms in DCell can be based on the Hamiltonian model, as in the methods of [8, 9]. One-to-one disjoint path covers (also named spanning connectivity [10, 11]) are an extension of Hamiltonian-connectivity, which could

* Corresponding author. \ No newline at end of file diff --git a/samples/texts/142615/page_5.md b/samples/texts/142615/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..3d27db8b697d6c432ecf9bff50f8725752f0db59 --- /dev/null +++ b/samples/texts/142615/page_5.md @@ -0,0 +1,24 @@
The result is optimal because any vertex in $DCell_k$ has exactly $r = n + k - 1$ neighbors.

This work is organized as follows. Section 2 provides preliminary knowledge. Basic one-to-one disjoint path cover properties are given in Section 3. Section 4 concludes the paper.

## 2 Preliminaries

A data center network can be represented by a simple graph $G = (V(G), E(G))$, where $V(G)$ is the vertex set and $E(G)$ is the edge set; each vertex represents a server and each edge represents a link between servers (switches can be regarded as transparent network devices [4]). The edge from vertex $u$ to vertex $v$ is denoted by $(u, v)$. In this paper, all graphs are simple and undirected.

We use $G_1 \cup G_2$ to denote the subgraph induced by $V(G_1) \cup V(G_2)$ in $G$. For $U \subseteq V(G)$, we use $G[U]$ to denote the subgraph induced by $U$ in $G$, i.e., $G[U] = (U, E')$, where $E' = \{(u, v) \in E(G) \mid u, v \in U\}$. A path in a graph is a sequence of vertices, $P: \langle u_0, u_1, \dots, u_n \rangle$, in which no vertices are repeated and $u_j$, $u_{j+1}$ are adjacent for $0 \le j < n$. Let $V(P)$ denote the set of all vertices appearing in $P$. We call $u_0$ and $u_n$ the terminal vertices of $P$. $P$ can be denoted by $P(u_0, u_n)$, a path beginning with $u_0$ and ending at $u_n$. Let $P_1$ denote $\langle u_0, u_1, \dots, u_k \rangle$ and $P_2$ denote $\langle u_k, u_{k+1}, \dots, u_n \rangle$; then $P_1 + P_2$ denotes the path $\langle u_0, u_1, \dots, u_k, u_{k+1}, \dots, u_n \rangle$. If $e = (u_k, u_{k+1})$, then $P_1 + e$ denotes the path $\langle u_0, u_1, \dots, u_k, u_{k+1} \rangle$. Furthermore, if $e = (u_{k-1}, u_k)$, then $P_1 - e$ denotes the path $\langle u_0, u_1, \dots, u_{k-1} \rangle$.

A path in a graph $G$ containing every vertex of $G$ is called a Hamiltonian path ($HP$). A Hamiltonian path beginning with a vertex $u$ and ending with another vertex $v$ in graph $G$ is denoted by $HP(u, v, G)$. Obviously, if $(v, u) \in E(G)$, then $HP(u, v, G) + (v, u)$ is a Hamiltonian cycle in $G$. A Hamiltonian graph is a graph containing a Hamiltonian cycle.
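The path operations defined in the preliminaries can be made concrete with a short sketch. This helper code is mine, not the paper's, and represents a path simply as a list of vertex labels:

```python
# Sketch (not from the paper) of the path operations of Section 2:
# P1 + P2 joins two paths meeting at a common terminal vertex,
# P + e appends an edge at the end of a path, and P - e removes one.

def join(p1, p2):
    """P1 + P2: concatenate paths with p1[-1] == p2[0]."""
    assert p1[-1] == p2[0], "paths must share a terminal vertex"
    return p1 + p2[1:]

def extend(p, e):
    """P + e: append edge e = (u_k, u_{k+1}) to a path ending at u_k."""
    u, v = e
    assert p[-1] == u, "edge must start at the path's last vertex"
    return p + [v]

def shrink(p, e):
    """P - e: remove the final edge e = (u_{k-1}, u_k) of the path."""
    u, v = e
    assert p[-2] == u and p[-1] == v, "e must be the path's last edge"
    return p[:-1]
```

For example, `join([0, 1, 2], [2, 3])` yields the path `[0, 1, 2, 3]`.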
$G$ is called a Hamiltonian-connected graph if there exists a Hamiltonian path between any two distinct vertices of $G$. Obviously, if $G$ is a Hamiltonian-connected graph, then $G$ is a Hamiltonian graph. Suppose that $u$ and $v$ are two vertices of a graph $G$. We say a set of $r$ paths between $u$ and $v$ is an $r$-disjoint path cover in $G$ if the $r$ paths share no vertex besides $u$ and $v$ and their union covers all vertices of $G$. An $r$-disjoint path cover is abbreviated as an $r$-DPC for simplicity.

A graph $G$ is one-to-one $r$-disjoint path coverable ($r$-DPC-able for short) if there is an $r$-DPC between any two vertices of $G$. In this paper, $G$ being $r$-DPC-able is not the same as $G$ being $(r+1)$-DPC-able.

For any other fundamental graph-theoretical terminology, please refer to [12].

DCell uses a recursively defined structure to interconnect servers. Each server connects to different levels of DCell through multiple links. We build a high-level DCell recursively from many low-level ones. Due to this structure, DCell uses only mini-switches to scale out instead of high-end switches to scale up, and it scales doubly exponentially with the server vertex degree.

We use $DCell_k$ to denote a $k$-dimensional DCell ($k \ge 0$); $DCell_0$ is a complete graph on $n$ vertices ($n \ge 2$). Let $t_0$ denote the number of vertices in a $DCell_0$, where $t_0 = n$. Let $t_k$ denote the number of vertices in a $DCell_k$ ($k \ge 1$), where $t_k = t_{k-1} \times (t_{k-1} + 1)$.
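The recurrence $t_k = t_{k-1} \times (t_{k-1} + 1)$ and the degree $r = n + k - 1$ can be tabulated with a small sketch (the function names are mine), which illustrates the doubly-exponential scaling noted above:

```python
# Sketch (helper names are mine) of DCell's size recurrence and vertex degree:
# t_0 = n and t_k = t_{k-1} * (t_{k-1} + 1); every vertex has n + k - 1 neighbors.

def dcell_size(n, k):
    """Number of servers t_k in DCell_k when DCell_0 has n vertices."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

def dcell_degree(n, k):
    """Degree r = n + k - 1 of every vertex of DCell_k."""
    return n + k - 1
```

With $t_0 = 2$ this gives $2, 6, 42, 1806, \dots$ servers for $k = 0, 1, 2, 3$, so a small vertex degree already reaches millions of servers.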
The vertices of $DCell_k$ can be labeled by $[\alpha_k, \alpha_{k-1}, \dots, \alpha_i, \dots, \alpha_0]$, where $\alpha_i \in \{0, 1, \dots, t_{i-1}\}$ for $i \in \{1, 2, \dots, k\}$, and $\alpha_0 \in \{0, 1, \dots, t_0 - 1\}$. Following the definition of $DCell_k$ in [4, 23], we give the recursive definition as Definition 1.

**Definition 1.** The $k$-dimensional DCell, $DCell_k$, is defined recursively as follows.

(1) $DCell_0$ is a complete graph consisting of $n$ vertices labeled $[0], [1], \dots, [n-1]$.

(2) For any $k \ge 1$, $DCell_k$ is built from $t_{k-1} + 1$ disjoint copies of $DCell_{k-1}$, according to the following steps.

(2.1) Let $DCell_{k-1}^0$ denote the graph obtained by prefixing the label of each vertex of one copy of $DCell_{k-1}$ with 0, let $DCell_{k-1}^1$ denote the graph obtained by prefixing the label of each vertex of a second copy with 1, and so on up to $DCell_{k-1}^{t_{k-1}}$. Clearly, $DCell_{k-1}^0 \cong DCell_{k-1}^1 \cong \cdots \cong DCell_{k-1}^{t_{k-1}}$.

(2.2) For any $\alpha_k, \beta_k \in \{0, 1, \dots, t_{k-1}\}$ with $\alpha_k \ge \beta_k$ (resp. $\alpha_k < \beta_k$), connect the vertex $[\alpha_k, \alpha_{k-1}, \dots, \alpha_1, \alpha_0]$ of $DCell_{k-1}^{\alpha_k}$ with the vertex $[\beta_k, \beta_{k-1}, \dots, \beta_1, \beta_0]$ of $DCell_{k-1}^{\beta_k}$ as follows:

$$
\left\{
\begin{aligned}
\alpha_k &= \beta_0 + \sum_{j=1}^{k-1} (\beta_j \times t_{j-1}) + 1 \\
\beta_k &= \alpha_0 + \sum_{j=1}^{k-1} (\alpha_j \times t_{j-1})
\end{aligned}
\right.
\qquad (1)
$$

(resp.

$$
\left\{
\begin{aligned}
\alpha_k &= \beta_0 + \sum_{j=1}^{k-1} (\beta_j \times t_{j-1}) \\
\beta_k &= \alpha_0 + \sum_{j=1}^{k-1} (\alpha_j \times t_{j-1}) + 1
\end{aligned}
\right.
\qquad (2)
$$

), where $\alpha_i, \beta_i \in \{0, 1, \dots, t_{i-1}\}$, $i \in \{1, 2, \dots, k\}$, and $\alpha_0, \beta_0 \in \{0, 1, \dots, t_0 - 1\}$.

By Definition 1, $DCell_{k-1}^{\alpha_k}$ is a subgraph of $DCell_k$, where $\alpha_k \in \{0, 1, \dots, t_{k-1}\}$. Figures 1(1), 1(2), and 1(3) demonstrate $DCell_0$, $DCell_1$, and $DCell_2$ with $t_0 = 2$, respectively; Figures 1(4) and 1(5) demonstrate $DCell_0$ and $DCell_1$ with $t_0 = 3$, respectively.

## 3 Main results

In this section, we study the one-to-one disjoint path cover properties of DCell.

**Theorem 1.** $DCell_k$ ($k \ge 0$) is Hamiltonian-connected with $t_0 \ge 2$, except for $DCell_1$ with $t_0 = 2$. In other words, $DCell_k$ ($k \ge 0$) is 1-DPC-able with $t_0 \ge 2$, except for $DCell_1$ with $t_0 = 2$.

*Proof.* We omit the proof due to the page limitation. $\square$

**Theorem 2.** $DCell_k$ is a Hamiltonian graph for any $k \ge 0$. In other words, $DCell_k$ is 2-DPC-able for any $k \ge 0$.

*Proof.* We omit the proof due to the page limitation. $\square$

**Lemma 1.** $DCell_0$ is $(t_0 - 1)$-DPC-able with $t_0 \ge 2$.

*Proof.* The lemma holds because $DCell_0$ is a complete graph [12]. $\square$

**Lemma 2.** $DCell_1$ is 2-DPC-able with $t_0 = 2$.

*Proof.* $DCell_1$ with $t_0 = 2$ is a cycle with 6 vertices. Therefore, $DCell_1$ is 2-DPC-able with $t_0 = 2$ [12].
$\square$

**Lemma 3.** $DCell_2$ is 3-DPC-able with $t_0 = 2$.

*Proof.* For $t_0 = 2$, we prove this lemma by construction: a 3-DPC between $u$ and $v$ in $DCell_2$ can be constructed for any pair of distinct vertices $u, v \in V(DCell_2)$.

For example, the 3-DPCs $\{P_1, P_2, P_3\}$ (resp. $\{R_1, R_2, R_3\}$, $\{T_1, T_2, T_3\}$, $\{S_1, S_2, S_3\}$, $\{U_1, U_2, U_3\}$) from $[0, 0, 0]$ to $[0, 0, 1]$ (resp. $[0, 1, 0]$, $[0, 1, 1]$, $[0, 2, 0]$, $[0, 2, 1]$), whose unions cover $V(DCell_2)$ with $t_0 = 2$, are listed below (similarly for the other cases).

$$P_1 =< [0, 0, 0], [0, 0, 1] >,$$

$$P_2 =< [0, 0, 0], [0, 1, 0], [0, 1, 1], [0, 2, 1], [0, 2, 0], [0, 0, 1] >,$$

$$P_3 =< [0, 0, 0], [1, 0, 0], [1, 0, 1], [2, 0, 1], [2, 2, 0], [2, 2, 1], [2, 1, 1], [4, 1, 0], \\
[4, 0, 0], [4, 0, 1], [4, 2, 0], [5, 2, 0], [5, 2, 1], [6, 2, 1], [6, 1, 1], [6, 1, 0], [6, 0, 0], [6, 0, 1], \\
[6, 2, 0], [4, 2, 1], [4, 1, 1], [3, 1, 1], [3, 2, 1], [3, 2, 0], [5, 1, 1], [5, 1, 0], [5, 0, 0], [5, 0, 1], \\
[1, 2, 0], [1, 2, 1], [1, 1, 1], [1, 1, 0], [3, 0, 1], [3, 0, 0], [3, 1, 0], [2, 1, 0], [2, 0, 0], [0, 0, 1] >.$$

$$R_1 =< [0, 0, 0], [0, 1, 0] >,$$

$$R_2 =< [0, 0, 0], [0, 0, 1], [0, 2, 0], [0, 2, 1], [0, 1, 1], [0, 1, 0] >,$$

$$R_3 =< [0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 2, 0], [1, 2, 1], [1, 1, 1], [1, 1, 0], [3, 0, 1], \\
[3, 2, 0], [3, 2, 1], [3, 1, 1], [4, 1, 1], [4, 2, 1], [6, 2, 0], [6, 0, 1], [6, 0, 0], [6, 1, 0], [6, 1, 1], \\
[6, 2, 1], [5, 2, 1], [5, 1, 1], [5, 1, 0], [5, 0, 0], [5, 0, 1], [5, 2, 0], [4, 2, 0], [4, 0, 1], [4, 0, 0], \\
[4, 1, 0], [2, 1, 1], [2, 2, 1], [2, 2, 0], [2, 0, 1], [2, 0, 0], [2, 1, 0], [3, 1, 0], [3, 0, 0], [0, 1, 0] >.$$

$$T_1 =< [0, 0, 0], [0, 1, 0], [0, 1, 1] >,$$

$$T_2 =< [0, 0, 0], [0, 0, 1], [0, 2, 0], [0, 2, 1], [0, 1, 1] >,$$

$$T_3 =< [0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 2, 0], [1, 2, 1], [1, 1, 1], [1, 1, 0], [3, 0, 1], \\
\dots, \\
[6, 1, 1], [6, 1, 0], [6, 0, 0], [6, 0, 1], [6, 2, 0], [4, 2, 1], [4, 2, 0], [4, 0, 1], [4, 0, 0], [0, 1, 1] >.$$

**Fig. 1.** (1), (2), and (3) demonstrate $DCell_0$, $DCell_1$, and $DCell_2$ with $t_0 = 2$ respectively. (4) and (5) demonstrate $DCell_0$ and $DCell_1$ with $t_0 = 3$ respectively.

$$S_1 =< [0, 0, 0], [0, 0, 1], [0, 2, 0] >,$$

$$S_2 =< [0, 0, 0], [0, 1, 0], [0, 1, 1], [0, 2, 1], [0, 2, 0] >,$$

$$S_3 =< [0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 2, 0], [1, 2, 1], [1, 1, 1], [1, 1, 0], [3, 0, 1], \\
[3, 0, 0], [3, 1, 0], [3, 1, 1], [4, 1, 1], [4, 1, 0], [4, 0, 0], [4, 0, 1], [4, 2, 0], [4, 2, 1], [6, 2, 0], \\
[6, 0, 1], [6, 0, 0], [6, 1, 0], [2, 2, 1], [2, 1, 1], [2, 1, 0], [2, 0, 0], [2, 0, 1], [2, 2, 0], [5, 1, 0], \\
[5, 1, 1], [3, 2, 0], [3, 2, 1], [6, 1, 1], [6, 2, 1], [5, 2, 1], [5, 2, 0], [5, 0, 1], [5, 0, 0], [0, 2, 0] >.$$
$$U_1 =< [0, 0, 0], [0, 0, 1], [0, 2, 0], [0, 2, 1] >,$$

$$U_2 =< [0, 0, 0], [0, 1, 0], [0, 1, 1], [0, 2, 1] >,$$

$$U_3 =< [0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 2, 0], [1, 2, 1], [1, 1, 1], [1, 1, 0], [3, 0, 1], \\
[3, 0, 0], [3, 1, 0], [2, 1, 0], [2, 0, 0], [2, 0, 1], [2, 2, 0], [5, 1, 0], [5, 0, 0], [5, 0, 1], [5, 2, 0], \\
[4, 2, 0], [4, 0, 1], [4, 0, 0], [4, 1, 0], [2, 1, 1], [2, 2, 1], [6, 1, 0], [6, 1, 1], [6, 2, 1], [5, 2, 1], \\
[5, 1, 1], [3, 2, 0], [3, 2, 1], [3, 1, 1], [4, 1, 1], [4, 2, 1], [6, 2, 0], [6, 0, 1], [6, 0, 0], [0, 2, 1] >.$$

$\square$

**Lemma 4.** For any $\alpha, \beta \in \{0, 1, \cdots, t_k\}$, $m \in \{1, 2, \cdots, t_k - 3\}$, and $\alpha \neq \beta$, let $x \in V(DCell_k^\alpha)$ be an arbitrary white vertex, $y \in V(DCell_k^\beta)$ be an arbitrary black vertex, and $G_0 = DCell_k^\alpha \cup DCell_k^\beta \cup (\bigcup_{\theta=0}^m DCell_k^{\omega_\theta})$, where $DCell_k^\alpha$, $DCell_k^\beta$, $DCell_k^{\omega_0}$, ..., $DCell_k^{\omega_m}$ are internally vertex-independent with $\omega_i \in \{0, 1, \cdots, t_k\}$ for $i \in \{0, 1, \cdots, m\}$. Then there exists a path between $x$ and $y$ that contains every vertex in $DCell_k[V(G_0)]$, where $k \ge 1$ and $t_0 = 2$.

*Proof.* Let $G_1 = DCell_k^\alpha \cup DCell_k^\beta$. Select $z \in V(DCell_k^\alpha)$ and $u \in V(DCell_k^\gamma)$ such that $z \neq x$, $(u, z) \in E(DCell_k)$, and $DCell_k^\gamma \subseteq G_0$, where the two graphs $G_1$ and $DCell_k^\gamma$ are internally vertex-independent. Select $\omega \in V(DCell_k^\beta)$ and $v \in V(DCell_k^\delta)$ such that $\omega \neq y$, $(\omega, v) \in E(DCell_k)$, and $DCell_k^\delta \subseteq G_0$, where the three graphs $G_1$, $DCell_k^\gamma$, and $DCell_k^\delta$ are internally vertex-independent. According to Theorem 1, there exists a path $P$ from $x$ to $z$ that contains every vertex in $DCell_k^\alpha$, and a path $Q$ from $\omega$ to $y$ that contains every vertex in $DCell_k^\beta$.
Let $G_2 = G_0[V(\bigcup_{\theta=0}^m DCell_k^{\omega_\theta})]$. Similarly to Theorem 1, we can construct a path $S$ from $u$ to $v$ that contains every vertex in $G_2$. Then $P + (z, u) + S + (v, \omega) + Q$ is a path between $x$ and $y$ that contains every vertex in $DCell_k[V(G_0)]$, where $k \ge 1$ and $t_0 = 2$. $\square$

**Lemma 5.** $DCell_k$ is $(k + 1)$-DPC-able with $k \ge 2$ and $t_0 = 2$.

*Proof.* We prove this lemma by induction on the dimension $k$ of DCell. By Lemma 3, the lemma holds for $t_0 = 2$ and $k = 2$. For $t_0 = 2$, suppose that the lemma holds for $k = \tau$ ($\tau \ge 2$); we will prove that it holds for $k = \tau + 1$.

Let $x, y \in V(DCell_{\tau+1})$ be any two vertices with $x \neq y$, and let $x \in V(DCell_\tau^\alpha)$ and $y \in V(DCell_\tau^\beta)$ with $\alpha, \beta \in \{0, 1, \dots, t_\tau\}$. We distinguish the following cases on $\alpha$ and $\beta$.

Case 1. $\alpha = \beta$. By the induction hypothesis, there exist $(\tau + 1)$ vertex disjoint paths $\{P_i \mid 1 \le i \le \tau + 1\}$ between the two distinct vertices $x$ and $y$ in $DCell_\tau^\alpha$. Select $u \in V(DCell_\tau^\gamma)$ and $v \in V(DCell_\tau^\delta)$ such that $(x, u), (y, v) \in E(DCell_{\tau+1})$, where the three graphs $DCell_\tau^\alpha$, $DCell_\tau^\gamma$, $DCell_\tau^\delta$ are internally vertex-independent. According to Lemma 4, there exists a path $P_{\tau+2}$ from $u$ to $v$ that visits every vertex in $DCell_{\tau+1}[V(DCell_{\tau+1} - DCell_\tau^\alpha)]$. Then there exist $(\tau + 2)$ vertex disjoint paths $\{P_i \mid 1 \le i \le \tau + 2\}$ between the two distinct vertices $x$ and $y$ in $DCell_{\tau+1}$.
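The $r$-DPC property established by these lemmas can be verified mechanically on small instances. The following checker is my own sketch, not the paper's; it assumes each listed sequence is a valid path of the graph and does not re-verify adjacency:

```python
# Sketch of an r-DPC check per Section 2: r paths from u to v that share no
# vertex besides u and v and whose union covers all vertices of the graph.
# (Assumption: each entry of `paths` is a valid path of the underlying graph.)

def is_r_dpc(paths, vertices, u, v):
    seen = set()
    for p in paths:
        if p[0] != u or p[-1] != v:
            return False                      # each path must run from u to v
        inner = set(p[1:-1])
        if inner & seen:
            return False                      # internal vertices must be disjoint
        seen |= inner
    return seen | {u, v} == set(vertices)     # the union must cover the graph

# DCell_0 with t_0 = 4 is the complete graph K_4, so by Lemma 1 there is a
# 3-DPC between vertices 0 and 3:
k4_dpc = [[0, 3], [0, 1, 3], [0, 2, 3]]
```

Here `is_r_dpc(k4_dpc, range(4), 0, 3)` holds, while dropping any one of the three paths leaves vertices uncovered and the check fails.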
Fachbereich Informatik der Universität Hamburg

Vogt-Kölln-Str. 30 ◊ D-22527 Hamburg / Germany

University of Hamburg - Computer Science Department

Mitteilung Nr. 297/00 • Memo No. 297/00

# Obstacles on the Way to Spatial Reasoning with Description Logics: Undecidability of $\mathcal{ALC}_{\mathcal{RA}}^\ominus$

(Slightly Revised Version)

Michael Wessel

Arbeitsbereich KOGS

FBI-HH-M-297/00

October 2000

**Definition 11 (Concatenation Grammar)** A context-free grammar $G = (\mathcal{V}, \Sigma, \mathcal{P}, S)$ is called a *concatenation grammar* iff $\mathcal{P} \subseteq \mathcal{V} \times ((\mathcal{V} \cup \Sigma) \times (\mathcal{V} \cup \Sigma))$. $\square$

We say that a language is a concatenation language iff it has a generating concatenation grammar. For example, the language $\{a, b\}$ is not a concatenation language, since every production of a concatenation grammar produces a word of length at least two. The language $\{a^n b^n \mid n \ge 1\}$ is a concatenation language, since it is generated by the grammar $(\{S, X\}, \{a, b\}, \{S \to a b, S \to a X, X \to S b\}, S)$.

**Lemma 1** The intersection problem for concatenation languages is undecidable.
$\square$

**Proof 1** (Thanks to Harald Ganzinger, who suggested this proof.) Let $G_1 = (\mathcal{V}_1, \Sigma_1, \mathcal{P}_1, S_1)$ and $G_2 = (\mathcal{V}_2, \Sigma_2, \mathcal{P}_2, S_2)$ be two arbitrary context-free grammars in Chomsky Normal Form.⁴ Let $\# \notin \mathcal{V}_1 \cup \mathcal{V}_2 \cup \Sigma_1 \cup \Sigma_2$ be a new terminal symbol, and for $i \in \{1, 2\}$ let $\Sigma'_i =_{def} \Sigma_i \cup \{\#\}$, $\mathcal{P}'_i =_{def} \{A \to B C \mid A \to B C \in \mathcal{P}_i\} \cup \{A \to a\# \mid A \to a \in \mathcal{P}_i\}$, and $G'_i = (\mathcal{V}_i, \Sigma'_i, \mathcal{P}'_i, S_i)$. Note that each $G'_i$ is a concatenation grammar.

Then, $\mathcal{L}(G'_1) \cap \mathcal{L}(G'_2) = \emptyset$ iff $\mathcal{L}(G_1) \cap \mathcal{L}(G_2) = \emptyset$. Since the latter is an undecidable problem for context-free grammars (e.g. see [21]), the former is undecidable as well. $\square$

Given an arbitrary concatenation grammar, the key observation is now that one can simply reverse the productions $\mathcal{P}$ of the grammar and obtain a role box $\mathfrak{R}$. If a word can be derived "top down" by the grammar using a derivation tree, then it is possible to "parse" this word in a bottom-up style using the role axioms. The following lemma establishes the relationship between the words derivable by a concatenation grammar and the models of the role box corresponding to this grammar:

**Lemma 2** Let $\mathcal{G} = (\mathcal{V}, \Sigma, \mathcal{P}, S)$ be an arbitrary concatenation grammar. Let $w = w_1 \dots w_n$ be a word, $w \in \Sigma^+$ with $|w| \ge 2$, and let $\mathcal{I}$ be a model of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$ with $\mathfrak{R} =_{def} \{B \circ C \sqsubseteq A \mid A \to B C \in \mathcal{P}\}$. Let $\langle x_0, x_1 \rangle \in w_1^\mathcal{I}, \dots, \langle x_{n-1}, x_n \rangle \in w_n^\mathcal{I}$ be an arbitrary path in the model $\mathcal{I}$ corresponding to $w$.

Let $V \in \mathcal{V}$ be an arbitrary non-terminal of $\mathcal{G}$.
Then, $\langle x_0, x_n \rangle \in V^\mathcal{I}$ holds in all models $\mathcal{I}$ of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$ iff there is a derivation of $w$ having $V$ as the root node; we write $V \xrightarrow{+} w$. As a consequence, $\langle x_0, x_n \rangle \in S^\mathcal{I}$ in all models $\mathcal{I}$ of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$ iff $w \in \mathcal{L}(\mathcal{G})$. $\square$

**Proof 2** "⇐" This can be shown by induction over the length of $w$.

⁴A context-free grammar $G = (\mathcal{V}, \Sigma, \mathcal{P}, S)$ is in Chomsky Normal Form iff $\mathcal{P} \subseteq \mathcal{V} \times ((\mathcal{V} \times \mathcal{V}) \cup \Sigma)$ (see [21]).

• If $|w| = 2$, $w = w_1 w_2$, and $V \xrightarrow{+} w$, then there must be a production of the form $V \to w_1 w_2 \in \mathcal{P}$. Note that there cannot be productions of the form $V \to w_1 B$, $V \to A w_2$, or $V \to A B$, since $\mathcal{G}$ is a concatenation grammar – we would additionally need productions of the form $A \to w_1 \ldots$, or even productions of the form $A \to \epsilon$. If $\mathcal{I}$ is a model of $\mathfrak{R}$ and $\langle x_0, x_1 \rangle \in w_1^\mathcal{I}$, $\langle x_1, x_2 \rangle \in w_2^\mathcal{I}$, then, due to $w_1 \circ w_2 \sqsubseteq V \in \mathfrak{R}$, we have $\langle x_0, x_2 \rangle \in V^\mathcal{I}$ in every model $\mathcal{I}$.

• Let $w = w_1 \dots w_n$, $n \ge 3$, and let $V \xrightarrow{+} w$. Since $\mathcal{G}$ is a concatenation grammar, there must be a production of the form $V \to X Y \in \mathcal{P}$, and the following cases can occur:

1. 
$X \in \mathcal{V}$, $Y \in \Sigma$: then there is a derivation $X \xrightarrow{+} w_1 \dots w_{n-1}$, and $Y = w_n$. By the induction hypothesis we have $\langle x_0, x_{n-1} \rangle \in X^{\mathcal{I}}$ in every model $\mathcal{I}$. Since we consider a model of $(\exists w_1 \dots \exists w_{n-1}.\exists w_n.\top, \mathfrak{R})$ with $\langle x_{n-1}, x_n \rangle \in w_n^{\mathcal{I}}$, we have $\langle x_0, x_n \rangle \in V^{\mathcal{I}}$, because $\mathcal{I}$ is a model of $\mathfrak{R}$ with $X \circ w_n \sqsubseteq V \in \mathfrak{R}$.

2. $X \in \Sigma$, $Y \in \mathcal{V}$: the same argumentation.

3. $X \in \mathcal{V}$, $Y \in \mathcal{V}$: let $w = uv$ be the partition of $w$ corresponding to the derivations $X \xrightarrow{+} u$, $Y \xrightarrow{+} v$. Let $u = w_1 \dots w_i$, $v = w_{i+1} \dots w_n$. By the induction hypothesis we have $\langle x_0, x_i \rangle \in X^{\mathcal{I}}$ and $\langle x_i, x_n \rangle \in Y^{\mathcal{I}}$, since both $u$ and $v$ have length smaller than $n$. We have $X \circ Y \sqsubseteq V \in \mathfrak{R}$. This shows that $\langle x_0, x_n \rangle \in V^{\mathcal{I}}$.

Summing up, we have shown that $\langle x_0, x_n \rangle \in V^{\mathcal{I}}$ holds in all models $\mathcal{I}$ of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$ if $V \xrightarrow{+} w$.

"⇒" If $\langle x_0, x_n \rangle \in V^\mathcal{I}$ holds in all models $\mathcal{I}$ of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$, then $\langle x_0, x_n \rangle \in V^\mathcal{I}$ is a logical consequence of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$; that is, it is enforced by the role axioms in $\mathfrak{R}$. One can easily construct a derivation tree for $w$, showing that $V \xrightarrow{+} w$, by inspecting one of these models. More formally, this could be shown by induction as well, and the proof would be very similar to the previous one.
$\square$

Since we are trying to reduce the intersection problem of concatenation grammars to the satisfiability problem of $\mathcal{ALC}_{\mathcal{RA}}^\ominus$, we have to deal with two grammars. Please note that concatenation languages are not closed under intersection (i.e. for two grammars $\mathcal{G}_1$ and $\mathcal{G}_2$ there is in general no concatenation grammar $\mathcal{G}_{1,2}$ such that $\mathcal{L}(\mathcal{G}_{1,2}) = \mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2)$). In order to deal with this problem, we have to put two concatenation grammars into one role box:

**Lemma 3** Let $\mathcal{G}_1 = (\mathcal{V}_1, \Sigma_1, \mathcal{P}_1, S_1)$ and $\mathcal{G}_2 = (\mathcal{V}_2, \Sigma_2, \mathcal{P}_2, S_2)$ be two arbitrary concatenation grammars. Without loss of generality we can assume $\mathcal{V}_1 \cap \mathcal{V}_2 = \emptyset$, since we can always consistently rename the variables in one of the grammars.

For $i \in \{1, 2\}$, we define $\mathfrak{R}_i =_{def} \{B \circ C \sqsubseteq A \mid A \to B C \in \mathcal{P}_i\}$. Let $\Sigma =_{def} \Sigma_1 \cup \Sigma_2$ and $\mathfrak{R} =_{def} \mathfrak{R}_1 \cup \mathfrak{R}_2$.

Then, for $i \in \{1, 2\}$, $w \in \mathcal{L}(\mathcal{G}_i)$ iff $\langle x_0, x_n \rangle \in S_i^\mathcal{I}$ in all models $\mathcal{I}$ of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$.
Obviously, $w \in \mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2)$ iff $\langle x_0, x_n \rangle \in S_1^\mathcal{I} \cap S_2^\mathcal{I}$ in all models $\mathcal{I}$ of $(\exists w_1 \dots \exists w_n.\top, \mathfrak{R})$. $\square$

**Proof 3** An easy consequence of the previous lemma and of the requirement that $\mathcal{V}_1 \cap \mathcal{V}_2 = \emptyset$ (the derivation trees do not become "mixed", i.e. each grammar uses solely its own productions). $\square$

As an application of this lemma, let us consider the two grammars

* $\mathcal{G}_1 = (\{S_1\}, \{a, b\}, \mathcal{P}_1, S_1)$, where $\mathcal{P}_1 = \{S_1 \to ab \mid aS_1b\}$,

* $\mathcal{G}_2 = (\{S_2\}, \{a, b\}, \mathcal{P}_2, S_2)$, where $\mathcal{P}_2 = \{S_2 \to aabb \mid aaS_2bb\}$.

Obviously, $\mathcal{L}(\mathcal{G}_1) = \{a^n b^n \mid n \ge 1\}$ and $\mathcal{L}(\mathcal{G}_2) = \{a^{2n} b^{2n} \mid n \ge 1\}$.
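The bottom-up role-composition mechanism of Lemma 2 can be simulated directly: each production $A \to BC$ is reversed into a composition axiom $B \circ C \sqsubseteq A$, and spans of the word are closed under these axioms. The following sketch (the grammar encoding and function names are mine) uses the concatenation grammar for $\{a^n b^n \mid n \ge 1\}$ given after Definition 11:

```python
# Sketch of the "bottom-up parsing" of Lemma 2: reversed productions act as
# composition axioms, and word spans are closed under them, CYK-style.

def parses(word, productions, start):
    """True iff `start` derives `word`; productions maps A -> [(B, C), ...]."""
    n = len(word)
    # span[(i, j)]: symbols labelling the "edge" from position i to position j
    span = {(i, i + 1): {word[i]} for i in range(n)}
    changed = True
    while changed:
        changed = False
        for (i, j), left in list(span.items()):
            for (j2, k), right in list(span.items()):
                if j2 != j:
                    continue  # spans must be adjacent, like composed roles
                for head, bodies in productions.items():
                    for b, c in bodies:
                        if b in left and c in right:
                            cell = span.setdefault((i, k), set())
                            if head not in cell:
                                cell.add(head)
                                changed = True
    return start in span.get((0, n), set())

# Concatenation grammar for {a^n b^n | n >= 1}: S -> a b, S -> a X, X -> S b.
anbn = {"S": [("a", "b"), ("a", "X")], "X": [("S", "b")]}
```

For example, `parses("aabb", anbn, "S")` holds while `parses("abb", anbn, "S")` does not, mirroring how the role box forces $\langle x_0, x_n \rangle \in S^\mathcal{I}$ exactly for the derivable words.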
Transformed into concatenation grammars we get

* $\mathcal{G}'_1 = (\{S_1, A\}, \{a, b\}, \mathcal{P}'_1, S_1)$, where
  $\mathcal{P}'_1 = \{S_1 \to ab \mid aA,\ A \to S_1 b\}$, and

* $\mathcal{G}'_2 = (\{S_2, B, C, D, E, F\}, \{a, b\}, \mathcal{P}'_2, S_2)$, where
  $\mathcal{P}'_2 = \{S_2 \to aB,\ B \to aC,\ C \to bb,\ S_2 \to aD,\ D \to aE,\ E \to S_2 F,\ F \to bb\}$.

The corresponding role box is

$$
\mathfrak{R} = \left\{
\begin{array}{@{}l@{}}
a \circ b \sqsubseteq S_1,\ a \circ A \sqsubseteq S_1,\ S_1 \circ b \sqsubseteq A, \\
a \circ B \sqsubseteq S_2,\ a \circ C \sqsubseteq B,\ b \circ b \sqsubseteq C, \\
a \circ D \sqsubseteq S_2,\ a \circ E \sqsubseteq D,\ S_2 \circ F \sqsubseteq E,\ b \circ b \sqsubseteq F
\end{array}
\right\}.
$$

The "first part" of this role box corresponds to $\mathcal{P}'_1$, and the "second part" to $\mathcal{P}'_2$. The symbols of the grammars now correspond to roles. Please consider $(\forall S_1.C \sqcap \forall S_2.D \sqcap \exists a.\exists a.\exists b.\exists b.\neg(C \sqcap D), \mathfrak{R})$. Any model of this pair would also be a model of $(\exists a.\exists a.\exists b.\exists b.\top, \mathfrak{R})$, and must therefore contain $\langle x_0, x_4 \rangle \in S_1^\mathcal{I} \cap S_2^\mathcal{I}$, because $w = aabb \in \mathcal{L}(\mathcal{G}'_1) \cap \mathcal{L}(\mathcal{G}'_2)$, due to Lemma 3.
Since $x_0 \in (\forall S_1.C \sqcap \forall S_2.D)^\mathcal{I}$, also $x_4 \in (C \sqcap D)^\mathcal{I}$ must hold, which obviously contradicts $x_4 \in (\neg(C \sqcap D))^\mathcal{I}$. The example is therefore unsatisfiable. Considering Figure 6, it can be seen that the role box performs a "bottom up parsing" of the word $aabb$ – the two derivation trees shown in the figure can be immediately discovered as role compositions in the graph.

Figure 6: "Bottom up parsing" of $aabb \in \mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2)$

We can now prove the main result of this section by showing how to reduce the intersection problem of concatenation grammars to the satisfiability problem of $\mathcal{ALC}_{\mathcal{RA}}^\ominus$:

**Theorem 1** The satisfiability problem of $\mathcal{ALC}_{\mathcal{RA}}^\ominus$ is undecidable. $\square$

**Proof 4** We give a pair $(E, \mathfrak{R}')$ for which no algorithm exists that is capable of checking its satisfiability.

Let $\mathcal{G}_1 = (\mathcal{V}_1, \Sigma_1, \mathcal{P}_1, S_1)$ and $\mathcal{G}_2 = (\mathcal{V}_2, \Sigma_2, \mathcal{P}_2, S_2)$ be two arbitrary concatenation grammars. Without loss of generality we assume $\mathcal{V}_1 \cap \mathcal{V}_2 = \emptyset$.

For $i \in \{1, 2\}$, we define $\mathfrak{R}_i =_{def} \{B \circ C \sqsubseteq A \mid A \to B C \in \mathcal{P}_i\}$. Let $\Sigma =_{def} \Sigma_1 \cup \Sigma_2$ and $\mathfrak{R} =_{def} \mathfrak{R}_1 \cup \mathfrak{R}_2$. Let $R? \notin \text{roles}(\mathfrak{R})$, and let

$$
\begin{array}{l}
\mathfrak{R}' =_{def} \mathfrak{R} \cup \{ R \circ S \sqsubseteq R?
\mid R, S \in (\{R?\} \cup \text{roles}(\mathfrak{R})), \\ +\quad \neg\exists ra \in \mathfrak{R} : \text{pre}(ra) = (R, S) \} +\end{array} +$$ + +be the completion of $\mathfrak{R}$. + +Then, $(E, \mathfrak{R}')$ is satisfiable iff $\mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2) = \emptyset$, where + +$$ +\begin{align*} +E &=_{def} X \sqcap \neg(C \sqcap D) \sqcap Y \sqcap \forall S_1.C \sqcap \forall S_2.D, &\text{with} \\ +X &=_{def} \sqcap_{a \in \Sigma} \exists a.\top &\text{and} \\ +Y &=_{def} \sqcap_{R \in \text{roles}(\mathfrak{R}')}\forall R.(X \sqcap \neg(C \sqcap D)). +\end{align*} +$$ \ No newline at end of file diff --git a/samples/texts/2395852/page_15.md b/samples/texts/2395852/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..06b0498235aadcbbc10bfd2a765a5dabcd61b04d --- /dev/null +++ b/samples/texts/2395852/page_15.md @@ -0,0 +1,12 @@ +Since $\mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2) = \emptyset$ is undecidable, the satisfiability of $(E, \mathfrak{R}')$ is undecid- +able as well. + +We have to show that $(E, \mathfrak{R}')$ is satisfiable iff $\mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2) = \emptyset$: + +⇒ We prove the contra-positive: if $\mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2) \neq \emptyset$, then $(E, \mathfrak{R}')$ is unsatisfiable. Assume to the contrary that $\mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2) \neq \emptyset$, but $(E, \mathfrak{R}')$ is satisfiable. Let $\mathcal{I}$ be a model of $(E, \mathfrak{R}')$. Because $\mathcal{I}$ satisfies $\mathfrak{R}'$, it holds that $\langle x_0, x_n \rangle \in (\bigcup_{R \in \text{roles}(\mathfrak{R}')} R^\mathcal{I})^+$ implies $\langle x_0, x_n \rangle \in *^\mathcal{I}$, where $*^\mathcal{I} =_{def} \bigcup_{R \in \text{roles}(\mathfrak{R}')} R^\mathcal{I}$ is the so-called universal relation. 
This is ensured by the fact that the composition of two arbitrary roles from $\text{roles}(\mathfrak{R}')$ is always defined in $\mathfrak{R}'$, due to the completion process. Since $\mathcal{I}$ is a model of $E$, there is some $x_0 \in E^\mathcal{I}$. Due to $x_0 \in (X \sqcap Y)^\mathcal{I}$ it holds that $x_0 \in ((\sqcap_{a \in \Sigma} \exists a.\top) \sqcap (\sqcap_{R \in \text{roles}(\mathfrak{R}')}\forall R.(\sqcap_{a \in \Sigma} \exists a.\top)))^\mathcal{I}$. The model $\mathcal{I}$ therefore represents all possible words $w \in \Sigma^+$. Let $w \in \mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2)$, with $w = w_1 \dots w_{n-1} w_n$. Obviously, $\mathcal{I}$ is also a model of $\exists w_1 \dots \exists w_n.\top$, with $x_0 \in (\exists w_1 \dots \exists w_n.\top)^\mathcal{I}$. Let $\langle x_0, x_1 \rangle \in w_1^\mathcal{I}, \dots, \langle x_{n-1}, x_n \rangle \in w_n^\mathcal{I}$ be a path in the model corresponding to $w$; then $\langle x_0, x_n \rangle \in *^\mathcal{I}$ holds. If $\mathcal{I}$ is a model of $\mathfrak{R}'$, then it is also a model of $\mathfrak{R}$ due to $\mathfrak{R} \subseteq \mathfrak{R}'$, and therefore Lemma 3 is applicable. Due to Lemma 3 we then have $\langle x_0, x_n \rangle \in S_1^\mathcal{I} \cap S_2^\mathcal{I}$ in every model, and since $x_0 \in (\forall S_1.C \sqcap \forall S_2.D)^\mathcal{I}$, $x_n \in (C \sqcap D)^\mathcal{I}$ must also hold in every model. However, this obviously contradicts $x_n \in (\neg(C \sqcap D))^\mathcal{I}$, which must hold because $\langle x_0, x_n \rangle \in *^\mathcal{I}$ and $x_0 \in (\sqcap_{R \in \text{roles}(\mathfrak{R}')}\forall R.(\neg(C \sqcap D)))^\mathcal{I}$. This shows that there are no models; $(E, \mathfrak{R}')$ is therefore unsatisfiable.

⇐ If $\mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2) = \emptyset$, then we show that $(E, \mathfrak{R}')$ is satisfiable by constructing an infinite model. The model $\mathcal{I}$ is constructed incrementally, i.e.
$\mathcal{I}_0 \subset \mathcal{I}_1 \subset \mathcal{I}_2 \subset \dots \subset \mathcal{I}_\omega$, $\mathcal{I} = \mathcal{I}_\omega$. We refer to the set $\bigcup_{a \in \Sigma} a^\mathcal{I}$ as the *skeleton* of the model $\mathcal{I}$. The skeleton has the form of an infinite tree. An illustration of $\mathcal{I}$ is given in Figure 7; the thick lines correspond to the skeleton. Each node in the model $\mathcal{I}$ has $|\Sigma|$ different *direct successors* in the skeleton; the skeleton of $\mathcal{I}$ is a tree with branching factor $|\Sigma|$. + +For each $n \in \mathbb{N} \cup \{0\}$, the skeleton of the interpretation $\mathcal{I}_n$ is a tree of depth $n$, encoding all words $w$ with $|w| \le n$, i.e. $w \in \bigcup_{i \in \{0, ..., n\}} \Sigma^i$. Each word $w$ of length $i = |w|$, $i \le n$, corresponds to a path from the root node $x_{0,0}$ to some node $x_{i,m}$ at depth $i$, in all skeletons of the models $\mathcal{I}_n$. Therefore, the skeleton of $\mathcal{I}$ represents all words from $\Sigma^+$. + +Intuitively, the terminal symbols of the words to be parsed by the role box are represented as *direct edges* in the skeleton of the model, whereas the *indirect edges* in this model are inserted to mimic the “bottom-up parsing process” of these words, which is performed by the role box. 
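The “bottom-up parsing” just described can be replayed as a simple fixpoint computation; the following Python sketch is illustrative only (hypothetical names, assuming deterministic axioms of the form $R \circ S \sqsubseteq T$):

```python
# Illustrative sketch of the "bottom-up parsing" performed by a role box
# (hypothetical helper names; assumes deterministic axioms R o S ⊑ T).

def saturate(edges, axioms):
    """Close a set of (x, role, y) edges under axioms (R, S) -> T,
    i.e. under R o S ⊑ T: whenever x -R-> y and y -S-> z, add x -T-> z."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, r, y) in list(edges):
            for (y2, s, z) in list(edges):
                if y2 == y and (r, s) in axioms:
                    derived = (x, axioms[(r, s)], z)
                    if derived not in edges:
                        edges.add(derived)
                        changed = True
    return edges

# Direct edges spelling the word "ab" along a skeleton path 0 -> 1 -> 2;
# the axiom a o b ⊑ S inserts the indirect edge (0, S, 2).
skeleton = {(0, 'a', 1), (1, 'b', 2)}
parsed = saturate(skeleton, {('a', 'b'): 'S'})
```

Iterating the composition rule over the direct edges adds exactly the indirect edges licensed by the role box, mirroring a bottom-up parse of the word spelled along the path.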
The construction of $\mathcal{I}$ therefore works as follows: \ No newline at end of file diff --git a/samples/texts/2395852/page_16.md b/samples/texts/2395852/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..f6787dd09259c1cad9960ea66b8cfdcce40f4549 --- /dev/null +++ b/samples/texts/2395852/page_16.md @@ -0,0 +1 @@ +Figure 7: Illustration of the constructed model for $(E, \mathfrak{R}')$ \ No newline at end of file diff --git a/samples/texts/2395852/page_17.md b/samples/texts/2395852/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..83de44d917b531d62e4eb17b51b83aa4de9d2f9c --- /dev/null +++ b/samples/texts/2395852/page_17.md @@ -0,0 +1,13 @@ +4. $C^{\mathcal{I}_{n+1}} := C^{\mathcal{I}_{n+1}} \cup \{x_{n+1,j} \mid \langle x_{0,0}, x_{n+1,j} \rangle \in S_1^{\mathcal{I}_{n+1}}\}$ + +5. $D^{\mathcal{I}_{n+1}} := D^{\mathcal{I}_{n+1}} \cup \{x_{n+1,j} \mid \langle x_{0,0}, x_{n+1,j} \rangle \in S_2^{\mathcal{I}_{n+1}}\}$ + +$\mathcal{I}$ is a model: due to step 3 in the construction we have $\mathcal{I}_n \models \mathfrak{R}'$ for all $n \in \mathbb{N} \cup \{0\}$ (since we have a finite number of role axioms and $\mathcal{I}_n$ is finite as well, the while-loop terminates in finite time), and therefore obviously $\mathcal{I} \models \mathfrak{R}'$. We prove that $x_{0,0} \in E^\mathcal{I}$. Due to the construction it is obviously the case that $x_{0,0} \in ((\sqcap_{a \in \Sigma} \exists a.\top) \sqcap (\sqcap_{R \in \text{roles}(\mathfrak{R}')}\forall R.(\sqcap_{a \in \Sigma} \exists a.\top)) \sqcap \forall S_1.C \sqcap \forall S_2.D)^\mathcal{I}$: for each node $x_{i,j} \in \Delta^\mathcal{I}$, we have $\langle x_{0,0}, x_{i,j} \rangle \in *^\mathcal{I}$ (recall that $*^\mathcal{I} =_{def} \bigcup_{R \in \text{roles}(\mathfrak{R}')} R^\mathcal{I}$), and each node has the required $k = |\Sigma|$ successors, $a_1, \dots, a_k$. This holds for $x_{0,0}$ as well as for $x_{i,j}$. 
This shows that $x_{0,0} \in X^\mathcal{I}$, $x_{i,j} \in X^\mathcal{I}$, and therefore $x_{0,0} \in (\forall R.(\sqcap_{a \in \Sigma} \exists a.\top))^\mathcal{I}$ for every $R \in \text{roles}(\mathfrak{R}')$. It also holds that $x_{0,0} \in (\sqcap_{R \in \text{roles}(\mathfrak{R}')}\forall R.\neg(C \sqcap D))^\mathcal{I}$. Assume the contrary: then there must be some successor node $x_{n,i_n} \in \Delta^\mathcal{I}$ with $x_{n,i_n} \in C^\mathcal{I}$, $x_{n,i_n} \in D^\mathcal{I}$. Since this node lies at depth $n$, it holds that $x_{n,i_n} \in \Delta^{\mathcal{I}_n}$ with $x_{n,i_n} \in C^{\mathcal{I}_n}$, $x_{n,i_n} \in D^{\mathcal{I}_n}$. Due to the construction, $x_{n,i_n} \in C^{\mathcal{I}_n}$ iff $\langle x_{0,0}, x_{n,i_n} \rangle \in S_1^{\mathcal{I}_n}$, and $x_{n,i_n} \in D^{\mathcal{I}_n}$ iff $\langle x_{0,0}, x_{n,i_n} \rangle \in S_2^{\mathcal{I}_n}$. Let $w$ be the corresponding path of length $n$ in the skeleton with $w = w_1 \dots w_n$, $\langle x_{0,0}, x_{1,i_1} \rangle \in w_1^\mathcal{I}$, $\dots, \langle x_{n-1,i_{n-1}}, x_{n,i_n} \rangle \in w_n^\mathcal{I}$, with $w_i \in \Sigma = \{a_1, \dots, a_k\}$, leading from $x_{0,0}$ to $x_{n,i_n}$. But then, due to Lemma 3, $w \in \mathcal{L}(\mathcal{G}_1) \cap \mathcal{L}(\mathcal{G}_2)$, due to $\langle x_{0,0}, x_{n,i_n} \rangle \in S_1^{\mathcal{I}_n} \cap S_2^{\mathcal{I}_n}$. Contradiction. Summing up, we have shown that $\mathcal{I} \models (E, \mathfrak{R}')$. $\square$ + +# 6 Discussion & Conclusion + +We have proven that the satisfiability problem of $\mathcal{ALC}_{RA\ominus}$ is undecidable. As already noted, this is a severe result, due to the high relevance of axioms having the form $S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$ in the field of qualitative (spatial or temporal) reasoning. + +Considering the proof, it can be seen that the full expressiveness of $\mathcal{ALC}$ was not needed in order to show the undecidability. 
In fact, the existential restrictions used in the proof have only the form $\exists R.\top$; no qualified existential restrictions were needed ($\exists R \equiv \exists R.\top$). Additionally, we did not make use of disjunctions on the right hand side of the role axioms in the proof – all role axioms were of the form $R \circ S \sqsubseteq T$. The negation operator was only used within $E$ in the form $\neg(C \sqcap D)$, which can be rewritten as $\neg C \sqcup \neg D$. Therefore, no full negation operator is needed; it is sufficient if the DL provides negation for concept names. Summing up, the language $\mathcal{ALU}$ with deterministic role boxes, called $\mathcal{ALU}_{RA\ominus}$, is already undecidable.⁵ + +⁵$\mathcal{ALU}$ provides negation for concept names, disjunction $\sqcup$, universal qualification $\forall R.C$ \ No newline at end of file diff --git a/samples/texts/2395852/page_18.md b/samples/texts/2395852/page_18.md new file mode 100644 index 0000000000000000000000000000000000000000..989d9749d323f8510bede726fac6c20dc68a6519 --- /dev/null +++ b/samples/texts/2395852/page_18.md @@ -0,0 +1,11 @@ +Figure 8: Illustration of an $\mathcal{ALC}_{RA}\ominus$ model of $\mathfrak{R}$ and $\exists R.((\exists S.\exists T.\top) \sqcap \forall Y.\bot) \sqcap \forall A.\bot$. The same concept is unsatisfiable in $\mathcal{ALC}_{RA}$ w.r.t. $\mathfrak{R}$. + +It is obvious that special kinds of role boxes lead to decidability. For example, if we restrict the set of admissible role boxes to role boxes of the form $\{R_1 \circ R_1 \sqsubseteq R_1, \dots, R_n \circ R_n \sqsubseteq R_n\}$, we get a syntactic variation of the logic $\mathcal{ALC}_{R+}$, with $R_1 \dots R_n$ declared as transitively closed roles. However, before considering special role boxes that might yield decidability and therefore special $\mathcal{ALC}_{RA}\ominus$ fragments, it is very important to understand the principal limitations; understanding these limitations was the motivation for this investigation. 
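As an aside, the restricted role boxes $\{R_1 \circ R_1 \sqsubseteq R_1, \dots, R_n \circ R_n \sqsubseteq R_n\}$ mentioned above merely close each role transitively, which the following minimal Python sketch illustrates (the helper name is made up and not part of any calculus):

```python
# Illustrative sketch: a role box containing only R o R ⊑ R closes the
# extension of R transitively, which is why such role boxes give a
# syntactic variant of ALC_R+ with transitively closed roles.

def close_transitively(pairs):
    """Saturate a binary relation under the single axiom R o R ⊑ R."""
    pairs = set(pairs)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(pairs):
            for (y2, z) in list(pairs):
                if y2 == y and (x, z) not in pairs:
                    pairs.add((x, z))
                    changed = True
    return pairs

# Saturating a chain yields its full transitive closure.
chain = close_transitively({(1, 2), (2, 3), (3, 4)})
```

Here the chain $1 \to 2 \to 3 \to 4$ is completed with the pairs $(1,3)$, $(2,4)$ and $(1,4)$, exactly the transitive closure.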
The question remains whether admissible role boxes can be found which are still useful for spatial reasoning tasks. Obviously, the syntactic restrictions should not be stronger than necessary. For example, considering a syntactic variation of $\mathcal{ALC}_{R+}$ makes no sense. + +In the following we will only briefly sketch why the undecidability result given here does not immediately apply to $\mathcal{ALC}_{RA}$. Let us examine the difference between $\mathcal{ALC}_{RA}$ and $\mathcal{ALC}_{RA}\ominus$. Considering the two different but very similar-looking languages, the question arises whether $\mathcal{ALC}_{RA}$ is in fact subsumed by $\mathcal{ALC}_{RA}\ominus$. + +The language $\mathcal{ALC}_{RA}$ requires that all roles be interpreted as disjoint: for any two different roles $R, S$ with $R \neq S$ and any interpretation $\mathcal{I}$, $R^\mathcal{I} \cap S^\mathcal{I} = \emptyset$ must hold. The calculus given in [22] requires that the role boxes are unique: for any pair of roles $R, S$, there is at most one role axiom $ra \in \mathfrak{R}$ with $\text{pre}(ra) = (R, S)$. The disjointness for roles really makes a difference: for example, the concept $\exists R.((\exists S.\exists T.\top) \sqcap \forall Y.\bot) \sqcap \forall A.\bot$ w.r.t. $\mathfrak{R} = \{R \circ S \sqsubseteq A \sqcup B, S \circ T \sqsubseteq X \sqcup Y, A \circ T \sqsubseteq U, B \circ T \sqsubseteq V, R \circ X \sqsubseteq U, R \circ Y \sqsubseteq V\}$ is satisfiable in $\mathcal{ALC}_{RA}\ominus$, but unsatisfiable in $\mathcal{ALC}_{RA}$ (see Figure 8). This is due to the fact that a non-empty role intersection between $U$ and $V$ is enforced, violating the disjointness requirement and yielding unsatisfiability only in the case of $\mathcal{ALC}_{RA}$. + +$\mathcal{ALC}_{RA}$ cannot be easily reduced to $\mathcal{ALC}_{RA}\ominus$, even though $\mathcal{ALC}_{RA}\ominus$ seems to + +and unqualified existential quantification $\exists R$. 
\ No newline at end of file diff --git a/samples/texts/2395852/page_19.md b/samples/texts/2395852/page_19.md new file mode 100644 index 0000000000000000000000000000000000000000..40d28b5acb45fb6908a6f4242b30c1ebc8cb3ca2 --- /dev/null +++ b/samples/texts/2395852/page_19.md @@ -0,0 +1,11 @@ +be the more general description logic, since there are fewer restrictions. If $(C, \mathfrak{R})$ is satisfiable in $\mathcal{ALC}_{RA}$, then it is also satisfiable in $\mathcal{ALC}_{RA\ominus}$, but not vice versa. One idea to enforce role disjointness within $\mathcal{ALC}_{RA\ominus}$ might be the following: for each role pair $R, S$ with $R \neq S$ to be declared as disjoint, create a new atomic “marker concept”, e.g. $[RS]$, and add two universal value restrictions like $\forall R.[RS] \sqcap \forall S.\neg[RS]$ conjunctively to the original concept $C$. $(C, \mathfrak{R})$ would be transformed into $((C \sqcap \forall R.[RS] \sqcap \forall S.\neg[RS] \sqcap \dots), \mathfrak{R})$, for each pair of disjoint roles $R, S$. Let $\mathcal{I}$ be a model of the latter, and let $x_0 \in (C \sqcap \forall R.[RS] \sqcap \forall S.\neg[RS] \sqcap \dots)^\mathcal{I}$. Unfortunately, this only ensures that $(\{\langle x_0, x_i \rangle \mid x_i \in \Delta^\mathcal{I}\} \cap R^\mathcal{I} \cap S^\mathcal{I}) = \emptyset$, which is obviously a much too weak requirement. As a “solution” one might think that the universal role $*$ might be used in order to propagate $\forall R.[RS] \sqcap \forall S.\neg[RS] \sqcap \dots$ to every individual in the model. $(C, \mathfrak{R})$ would be transformed into $((C \sqcap \forall R.[RS] \sqcap \forall S.\neg[RS] \sqcap \dots \sqcap \forall *.(\forall R.[RS] \sqcap \forall S.\neg[RS] \sqcap \dots)), \mathfrak{R}')$. Unfortunately, this is a much stronger requirement than disjointness for roles, since the additional conjunct now enforces $\{x_j \mid \langle x_i, x_j \rangle \in R^\mathcal{I}\} \cap \{x_j \mid \langle x_i, x_j \rangle \in S^\mathcal{I}\} = \emptyset$, which obviously implies $R^\mathcal{I} \cap S^\mathcal{I} = \emptyset$. + +Instead, one would need some kind of “counting construct” that would enable +the distinction of different individuals in order to simulate the role disjointness +of $\mathcal{ALC}_{RA}$ within $\mathcal{ALC}_{RA\ominus}$. We therefore believe that disjoint roles are really +something very special that cannot be simulated by means of any $\mathcal{ALC}_{RA\ominus}$ construction. 
Therefore, we conjecture that $\mathcal{ALC}_{RA}$ is not subsumed by $\mathcal{ALC}_{RA\ominus}$. + +In the undecidability proof we enforced the existence of the appropriate successors $a_1, \dots, a_k$ with $\Sigma = \{a_1, \dots, a_k\}$ for every node in the model. The existence of every word $w$ in the model was therefore guaranteed. For this purpose the role box was completed, using the auxiliary role $R_?$. In the case of $\mathcal{ALC}_{RA}$, we can make use of the same construction. However, due to the disjointness requirement, things get more complicated. + +Considering the reduction, the unsatisfiability is due to the fact that $\forall S_1.C \sqcap \forall S_2.D$ is used to assert $C$ and $D$ to one and the same individual from the root node in order to produce an inconsistency via $\neg(C \sqcap D)$, which holds for all individuals. Obviously, with a disjointness requirement on $S_1$ and $S_2$, the presence of $S_1$ and $S_2$ connecting the nodes $x_{0,0}$ and $x_{i,j}$ such that $\langle x_{0,0}, x_{i,j} \rangle \in S_1^\mathcal{I} \cap S_2^\mathcal{I}$ is an inconsistency by itself. Therefore, $\forall S_1.C \sqcap \forall S_2.D$ is useless for producing an inconsistency, since the inconsistency is present in the first place due to the disjointness requirement. If the same proof technique could be applied to show the undecidability of $\mathcal{ALC}_{RA}$, then neither disjunctions nor atomic negation would be needed for this undecidability proof: if $X =_{def} \exists a_1 \sqcap \dots \sqcap \exists a_k$, then $((X \sqcap \sqcap_{R \in \text{roles}(\mathfrak{R}')} \forall R.X), \mathfrak{R}')$ would suffice to prove the undecidability! This concept is already expressible in the language $\mathcal{FL}^{-}$, showing even the undecidability of $\mathcal{FL}_{RA}^{-}$. 
This would indicate that composition-based role axioms are really highly problematic language constructs, since adding them to one of the \ No newline at end of file diff --git a/samples/texts/2395852/page_2.md b/samples/texts/2395852/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..fbb17bf31fdf59ca7bc2301a722d2baa43412065 --- /dev/null +++ b/samples/texts/2395852/page_2.md @@ -0,0 +1,33 @@ +An interpretation $\mathcal{I}$ satisfies / is a model of $(C, \mathfrak{R}, \mathfrak{T})$, written $\mathcal{I} \models (C, \mathfrak{R}, \mathfrak{T})$, iff $\mathcal{I} \models C$, $\mathcal{I} \models \mathfrak{R}$ and $\mathcal{I} \models \mathfrak{T}$. + +**Definition 7 (Satisfiability)** A syntactic entity (concept, role box, concept with role box, etc.) is called *satisfiable* iff there is an interpretation which satisfies this entity; i.e. the entity has a model. ■ + +Then, the satisfiability problem is to decide whether a syntactic entity is satisfiable or not. + +An important relationship between concepts is the subsumption relationship, +which is a partial ordering on concepts w.r.t. their specificity: + +**Definition 8 (Subsumption Relationship)** A concept $D$ subsumes a concept $C$, $C \sqsubseteq D$, iff $C^\mathcal{I} \subseteq D^\mathcal{I}$ holds for all interpretations $\mathcal{I}$. $\square$ + +Since $\mathcal{ALC}_{RA\ominus}$ provides a full negation operator, the subsumption problem can +be reduced to the concept satisfiability problem: $C \sqsubseteq D$ iff $C \sqcap \neg D$ is unsatisfiable. + +It should be noticed that a satisfiability tester for $\mathcal{ALC}_{RA\ominus}$ would also be able +to determine satisfiability resp. subsumption w.r.t. free TBoxes. Each concept +inclusion axiom can be dealt with by a technique called *internalization* (see +[13, 14, 1]). Internalization for $\mathcal{ALC}_{RA\ominus}$ works as follows. Let $(C, \mathfrak{R}, \mathfrak{T})$ be the +concept, role box and free TBox to be tested for satisfiability. 
Let $R? \in \mathcal{N}_R$ be +some role such that $R? \notin \text{roles}(C) \cup \text{roles}(\mathfrak{R})$. Referring to $R?$ and $(C, \mathfrak{R}, \mathfrak{T})$, +the role box $\mathfrak{R}$ is completed: $\mathfrak{R}' = \mathfrak{R} \cup \{ R \circ S \sqsubseteq R? \mid R, S \in (\{R?\} \cup \text{roles}(\mathfrak{R})),$ +$\neg\exists ra \in \mathfrak{R}: \text{pre}(ra) = (R, S) \}$. + +Now, $(C, \mathfrak{R}, \mathfrak{T})$ is satisfiable iff $((C \sqcap M_{\mathfrak{T}} \sqcap \forall *. M_{\mathfrak{T}}), \mathfrak{R}')$ is satisfiable, where +$\forall *. M_{\mathfrak{T}}$ is an abbreviation for $\forall (\sqcup_{R \in \text{roles}(\mathfrak{R}')} R). M_{\mathfrak{T}}$. $M_{\mathfrak{T}}$ is the so-called *meta-constraint* corresponding to the TBox $\mathfrak{T}$: $M_{\mathfrak{T}} =_{def} \sqcap_{C \sqsubseteq D \in \mathfrak{T}} (\neg C \sqcup D)$. + +# 3 Relationships to Other Logics + +In order to judge the expressive power of $\mathcal{ALC}_{RA\ominus}$ we consider other logics and examine whether they are subsumed¹ by $\mathcal{ALC}_{RA\ominus}$. We briefly sketch the relationships to the most important base description logics offering some form of transitivity. Additionally, (un)decidability results regarding transitivity extensions of the so-called (loosely) guarded fragment of FOPL are briefly discussed. + +¹We say that a language *A* is “subsumed” by a language *B* (resp. provides the same expressive power) iff the satisfiability problem of *A* can be reduced to the satisfiability problem of *B*. \ No newline at end of file diff --git a/samples/texts/2395852/page_20.md b/samples/texts/2395852/page_20.md new file mode 100644 index 0000000000000000000000000000000000000000..9a54fa3a0f9800a0dcf6107074439ae11569989c --- /dev/null +++ b/samples/texts/2395852/page_20.md @@ -0,0 +1,20 @@ +simplest of all description logics $\mathcal{FL}^{-}$ would already yield undecidability. 
However, it is an open question whether the exploited proof technique can indeed be applied. + +Considering the tableaux calculus for $\mathcal{ALC}_{RA}$ given in [22], we only conjectured that $\mathcal{ALC}_{RA}$ might be decidable. We did not prove it. The given tableaux calculus was incomplete since it lacked a correct definition of a so-called *blocking condition*. The tableaux calculus was presented in the expectation that an appropriate blocking condition could be found in the future. However, we still have not found a correct blocking condition for $\mathcal{ALC}_{RA}$. On the other hand, the reader should be informed that we have also carefully tried to reduce various other known undecidable problems to $\mathcal{ALC}_{RA}$, but without success (e.g. the Domino Problem). + +As the investigation has shown, the exact position of the boundary line between +decidable and undecidable description logics with composition-based role axioms +of the form $S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$ must be investigated much more thoroughly +in the future. + +# 7 Acknowledgments + +I would like to thank Harald Ganzinger, Volker Haarslev, Ullrich Hustadt, Amar Isli, Carsten Lutz, Thomas Mantay, Bernd Neumann, Ralf Möller and Anni-Yasmin Turhan for valuable discussions on the topics covered in this paper. I am especially grateful to Harald Ganzinger and Carsten Lutz. Both read a draft of this paper and suggested modifications in the proof to improve its comprehensibility. + +# References + +[1] F. Baader. Augmenting concept languages by transitive closure of roles: An alternative to terminological cycles. In *Twelfth International Conference on Artificial Intelligence, Darling Harbour, Sydney, Australia, Aug. 24-30, 1991*, pages 446–451, August 1991. + +[2] R.J. Brachman and J.G. Schmolze. An overview of the KL-ONE knowledge representation system. *Cognitive Science*, pages 171–216, August 1985. + +[3] M. Buchheit, F.M. Donini, and A. Schaerf. 
Decidable reasoning in terminological knowledge representation systems. *Journal of Artificial Intelligence Research*, 1:109–138, 1993. \ No newline at end of file diff --git a/samples/texts/2395852/page_21.md b/samples/texts/2395852/page_21.md new file mode 100644 index 0000000000000000000000000000000000000000..e0d2b71badc59af9d1f5ebfde3e48d18988cea9d --- /dev/null +++ b/samples/texts/2395852/page_21.md @@ -0,0 +1,17 @@ +[4] F.M. Donini, M. Lenzerini, D. Nardi, and W. Nutt. The complexity of concept languages. Technical Report RR-95-07, German Center for AI (DFKI), 1995. + +[5] F.M. Donini, M. Lenzerini, D. Nardi, and A. Schaerf. Reasoning in description logics. In G. Brewka, editor, *Principles of Knowledge Representation*. CSLI Publications, 1996. + +[6] M.J. Egenhofer. Reasoning about binary topological relations. In O. Günther and H.-J. Schek, editors, *Advances in Spatial Databases, Second Symposium, SSD’91, Zurich, Aug. 28-30, 1991*, volume 525 of *Lecture Notes in Computer Science*, pages 143–160. Springer Verlag, Berlin, August 1991. + +[7] H. Ganzinger, Chr. Meyer, and M. Veanes. The two-variable guarded fragment with transitive relations. In *Proc. 14th IEEE Symposium on Logic in Computer Science*, pages 24–34. IEEE Computer Society Press, 1999. To appear in LICS’99. + +[8] E. Grädel. Guarded fragments of first-order logic: a perspective for new description logics? Extended abstract, Proceedings of 1998 International Workshop on Description Logics DL ‘98, Trento 1998, CEUR Electronic Workshop Proceedings, http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-11. + +[9] E. Grädel. On the restraining power of guards. (24 pages), to appear in *Journal of Symbolic Logic*. + +[10] E. Grädel, M. Otto, and E. Rosen. Undecidability Results for Two-Variable Logics. *Archive for Mathematical Logic*, 38:313–354, 1999. See also: Proceedings of 14th Symposium on Theoretical Aspects of Computer Science STACS’97, Lecture Notes in Computer Science No. 
1200, Springer 1997, 249–260. + +[11] V. Haarslev, C. Lutz, and R. Möller. A description logic with concrete domains and a role-forming predicate operator. *Journal of Logic and Computation*, 9(3):351–384, June 1999. + +[12] V. Haarslev, R. Möller, A.-Y. Turhan, and M. Wessel. On terminological default reasoning about spatial information: Extended abstract. In P. Lambrix et al., editor, *Proceedings of the International Workshop on Description Logics (DL’99), July 30 - August 1, 1999, Linköping, Sweden*, pages 155–159, June 1999. \ No newline at end of file diff --git a/samples/texts/2395852/page_22.md b/samples/texts/2395852/page_22.md new file mode 100644 index 0000000000000000000000000000000000000000..0a9c58272b7c2191e17cfe52ad1638da06583ca1 --- /dev/null +++ b/samples/texts/2395852/page_22.md @@ -0,0 +1,21 @@ +[13] I. Horrocks and U. Sattler. A description logic with transitive and inverse roles and role hierarchies. *Journal of Logic and Computation*, 9(3):385–410, 1999. + +[14] I. Horrocks, U. Sattler, and S. Tobies. Practical reasoning for expressive description logics. In *Proceedings of the 6th International Conference on Logic for Programming and Automated Reasoning (LPAR '99)*, 1999. + +[15] C. Lutz and R. Möller. Defined topological relations in description logics. In M.-C. Rousset et al., editor, *Proceedings of the International Workshop on Description Logics, DL'97, Sep. 27-29, 1997, Gif sur Yvette, France*, pages 15–19. Universite Paris-Sud, Paris, September 1997. + +[16] D. A. Randell, Z. Cui, and A. G. Cohn. A Spatial Logic based on Regions and Connections. In B. Nebel, C. Rich, and W. Swartout, editors, *Principles of Knowledge Representation and Reasoning*, pages 165–176, 1992. + +[17] U. Sattler. A concept language extended with different kinds of transitive roles. In G. Görz and S. Hölldobler, editors, *20. Deutsche Jahrestagung für Künstliche Intelligenz*, number 1137 in Lecture Notes in Artificial Intelligence, pages 333–345. 
Springer Verlag, Berlin, 1996. + +[18] K. Schild. A correspondence theory for terminological logics: Preliminary report. In *Twelfth International Conference on Artificial Intelligence, Darling Harbour, Sydney, Australia, Aug. 24-30, 1991*, pages 466–471, August 1991. + +[19] M. Schmidt-Schauß. Subsumption in KL-ONE is Undecidable. In *Principle of Knowledge Representation and Reasoning – Proceedings of the First International Conference KR '89*, 1989. + +[20] M. Schmidt-Schauß and G. Smolka. Attributive concept descriptions with complements. *Artificial Intelligence*, 48:1–26, 1991. + +[21] U. Schöning. *Theoretische Informatik kurz gefasst*. BI-Wissenschaftsverlag, 1. edition, 1992. + +[22] M. Wessel, V. Haarslev, and R. Möller. *ALCRA – ALC* with Role Axioms. In F. Baader and U. Sattler, editors, *Proceedings of the International Workshop in Description Logics 2000 (DL2000)*, number 33 in CEUR-WS, pages 21–30, Aachen, Germany, August 2000. RWTH Aachen. Proceedings online available from http://SunSITE.Informatik.RWTH-Aachen.DE/Publications/CEUR-WS/Vol-33/. + +[23] W.A. Woods and J.G. Schmolze. The KL-ONE family. In F. Lehmann, editor, *Semantic Networks in Artificial Intelligence*, pages 133–177. Pergamon Press, Oxford, 1992. \ No newline at end of file diff --git a/samples/texts/2395852/page_23.md b/samples/texts/2395852/page_23.md new file mode 100644 index 0000000000000000000000000000000000000000..65f45016b3b0950ca2a4393e14e9bbebca5e9f6f --- /dev/null +++ b/samples/texts/2395852/page_23.md @@ -0,0 +1,16 @@ +Obstacles on the Way to +Spatial Reasoning with Description Logics: +Undecidability of $\mathcal{ALC}_{RA}\ominus$ + +Michael Wessel + +University of Hamburg, Computer Science Department, +Vogt-Kölln-Str. 30, 22527 Hamburg, Germany + +Abstract + +This paper presents the new description logic $\mathcal{ALC}_{RA}\ominus$. 
$\mathcal{ALC}_{RA}\ominus$ combines the well-known standard description logic $\mathcal{ALC}$ with composition-based role axioms of the form $S \circ T \sqsubseteq R_1 \sqcup \cdots \sqcup R_n$. We argue that these axioms are nearly indispensable components in a description logic framework suitable for qualitative spatial reasoning tasks. An $\mathcal{ALC}_{RA}\ominus$ spatial reasoning example is presented, and the relationships to other description logics are discussed (namely $\mathcal{ALC}_{RA}$, $\mathcal{ALC}_{R+}$, $\mathcal{ALC}_\oplus$, $\mathcal{ALCH}_{R+}$). Unfortunately, the satisfiability problem of this new logic is undecidable. Due to the high relevance of role axioms of the proposed form for all kinds of qualitative reasoning tasks, the undecidability of $\mathcal{ALC}_{RA}\ominus$ is an important result. + +# 1 Introduction and Motivation + +Since the introduction of KL-ONE (see [2]), knowledge representation systems based on description logics (DLs) have proven to be valuable tools in the field of formal knowledge representation. Description logic systems offer formally defined syntax and semantics, which enables the unambiguous specification of the services offered to users of these systems. In fact, many early knowledge representation systems and frameworks suffered from unclear semantics (e.g. see [23] for an overview and discussion). In many cases the underlying base description logic of a DL-based system can be seen as a subset of first order predicate logic (FOPL). In contrast to FOPL, decidability of diverse inference problems is usually guaranteed for description logics, for example, for the *satisfiability problem* \ No newline at end of file diff --git a/samples/texts/2395852/page_24.md b/samples/texts/2395852/page_24.md new file mode 100644 index 0000000000000000000000000000000000000000..1c3f13624db29f7ba4fc3a1ebf9496a7e9d492f2 --- /dev/null +++ b/samples/texts/2395852/page_24.md @@ -0,0 +1,9 @@ +of formulas. 
Please recall that the satisfiability problem is undecidable for FOPL (only the unsatisfiability problem is semi-decidable). Moreover, for (less expressive) description logics even tractable (deterministic polynomial-time) inference algorithms have been found (see [4, 5]). The merits of description logics are widely recognized, and a remarkable amount of research covering theory and practice has been carried out during the last 20 years. However, the trade-off between expressiveness and tractability has remained a problem. + +Description logics focus on the structural description of unary and binary predicates. Unary predicates are called *concepts*, and binary predicates correspond to so-called *roles*. Sometimes DLs are even called *concept description languages* – indicating that the focus has traditionally been more on the side of the concept descriptions than on the side of the role descriptions. In our opinion the ability to interrelate roles via some kind of constraints has not been investigated as thoroughly as the concept description side of DLs. For example, see [3] for a description logic providing *role conjunction*. In contrast to role conjunction, *role disjunction* is not interesting in most description logics. *Role negation* has only been considered very recently. Role inclusion axioms have also been considered. However, the space of possibilities for role axioms resp. formulas relating roles to one another has not been exhaustively examined. To the best of our knowledge, the concept satisfiability problem w.r.t. a set of role axioms of the proposed form has not been considered before. + +As for formulas in FOPL, the syntax of the concepts is determined by a set of concept-forming operators and a set of atomic components, so-called role names and concept names. The semantics of the syntactic elements is then specified by giving a Tarski-style *interpretation*. An interpretation maps concepts and roles to unary resp. 
binary relations on the non-empty interpretation domain: concepts are therefore mapped to subsets of the interpretation domain, and roles to sets of tuples of domain objects. The denoted (unary or binary) relation is also called the *extension* of the concept or role. + +If the semantics of the operators is preserved by the mapping and the extension of a concept is non-empty, then the interpretation is said to be a *model* of that concept. Given an arbitrary concept *C* of the language, the most important inference problem is to decide whether *C* has a model. In this case, *C* is called *satisfiable*. + +Before we discuss the modeling of *spatial* concepts, let us consider some non-spatial concepts. For example, the unary FOPL predicate `father_withSon(x)` could be defined by means of the FOPL formula `human(x) ∧ male(x) ∧ ∃y : has_child(x, y) ∧ human(y) ∧ male(y)`. Here, `human` and `male` are unary predicate names, whereas `has_child` is a binary predicate name. Translated into the variable-free description logic syntax we would get `human ⊓ male ⊓` \ No newline at end of file diff --git a/samples/texts/2395852/page_25.md b/samples/texts/2395852/page_25.md new file mode 100644 index 0000000000000000000000000000000000000000..c551458c051389697a3e352f7bc4635ac559d772 --- /dev/null +++ b/samples/texts/2395852/page_25.md @@ -0,0 +1,28 @@ +Figure 1: Simple Example + +∃has\_child.(human ⊓ male). The whole expression is a concept, human and male are concept names or atomic concepts, has\_child is a role (name), and ⊓ and ∃ are concept-forming operators. + +Obviously, if roles are not related to one another by some kind of constraints, we cannot claim to have represented inherent properties that would +be natural for some relationships. For example, in order to appropriately +capture the meaning of the relationship “niece”, one would have to ensure that a brother’s or a sister’s daughter is indeed a niece of this very +same person. 
In FOPL this requirement could be expressed by means of +two (conjunctively combined) universally quantified statements of the form +∀x,y,z : (has\_brother(x,y) ∧ has\_daughter(y,z) ⇒ has\_niece(x,z)) and +∀x,y,z : (has\_sister(x,y) ∧ has\_daughter(y,z) ⇒ has\_niece(x,z)). The interpretation of the role has\_niece is then no longer independent from the interpretations of the roles has\_brother (resp. has\_sister) and has\_daughter. The +role axioms of $\mathcal{ALC}_{RA}\ominus$ allow one to express global universally quantified implication statements exactly like these: in fact, these formulas are equivalent +to the $\mathcal{ALC}_{RA}\ominus$ role axioms has\_brother $\circ$ has\_daughter $\sqsubseteq$ has\_niece and +has\_sister $\circ$ has\_daughter $\sqsubseteq$ has\_niece. + +In the following we assume that the reader is familiar with description logics, at least with the basic logic $\mathcal{ALC}$ (see [20] and [23] for an introduction). Basically $\mathcal{ALC}_{RA}\ominus$ augments the standard description logic $\mathcal{ALC}$ with composition-based role axioms of the form $S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$, $n \ge 1$, enforcing $S^\mathcal{I} \circ T^\mathcal{I} \subseteq R_1^\mathcal{I} \cup \dots \cup R_n^\mathcal{I}$ on the models $\mathcal{I}$. This corresponds to a universally quantified FOPL formula of the form $\forall x,y,z : (S(x,y) \land T(y,z) \Rightarrow R_1(x,z) \lor \dots \lor R_n(x,z))$. A finite set of these role axioms is called a *role box* and is denoted by $\mathfrak{R}$. + +Please consider the $\mathcal{ALC}$ concept $(\exists R.\exists S.C) \sqcap \forall T.\neg C$; in FOPL: + +(∃[x]((∃[y](R(x,y) ∧ ∃[x](S(y,x) ∧ C(x)))) ∧ (∀[y](T(x,y) ⇒ ¬C(y))))). + +Obviously, this concept is satisfiable, since the FOPL formula is satisfiable. However, the same concept is unsatisfiable in $\mathcal{ALC}_{RA}\ominus$ w.r.t. 
the role box
\ No newline at end of file
diff --git a/samples/texts/2395852/page_26.md b/samples/texts/2395852/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f6c6fbffdaa74762f8dc96e9f946ef7e25be4f9
--- /dev/null
+++ b/samples/texts/2395852/page_26.md
@@ -0,0 +1,35 @@
Figure 2: Complex Example

{R ∘ S ⊑ T}. Again, translated into FOPL we have

$$
\begin{array}{l}
(\forall[x,y,z](R(x,y) \land S(y,z) \Rightarrow T(x,z))) \land \\
(\exists[x]((\exists[y](R(x,y) \land \exists[x](S(y,x) \land C(x)))) \land (\forall[y](T(x,y) \Rightarrow \neg C(y))))).
\end{array}
$$

The only difference from the former formula is the additional conjunct in the first line, expressing the role axiom. In fact, $(\forall[x,y,z](R(x,y) \land S(y,z) \Rightarrow T(x,z)))$ enforces the presence of $T(x,z)$ because of $(\exists[x]((\exists[y](R(x,y) \land \exists[x](S(y,x) \land C(x)))) \dots))$, and then the qualification $\forall[y](T(x,y) \Rightarrow \neg C(y))$ is applicable, yielding an inconsistency since $C$ also holds for this individual.

As another example taken from the realm of genealogy, let us consider the concept expression

$$
\begin{array}{l}
(\exists has\_brother.\exists has\_sister.\exists has\_sister.\exists has\_daughter.\exists has\_sister.\ computer\_science\_student) \\
\sqcap\ (\forall has\_niece.\neg computer\_science\_student)
\end{array}
$$

w.r.t. the role box

{ has_brother ∘ has_sister ⊑ has_sister,
has_sister ∘ has_daughter ⊑ has_niece,
has_daughter ∘ has_sister ⊑ has_daughter,
has_sister ∘ has_sister ⊑ has_sister }.

A careful inspection reveals that this concept is inconsistent w.r.t. this role box, since the computer science student also plays the role of a niece and is therefore a filler of the *has_niece* role; see Figure 2.

Note that composition of roles is not allowed to appear on the right-hand side of role axioms.
One can therefore not write axioms like *has_niece* ⊑ (*has_brother* ∘ *has_daughter*) ⊔ (*has_sister* ∘ *has_daughter*). The rationale for this restriction is that it has been known since 1989 that allowing composition also on the right-hand side of role axioms would yield a form of undecidability that is
\ No newline at end of file
diff --git a/samples/texts/2395852/page_27.md b/samples/texts/2395852/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..d68b456c98f7dc46c9b308335eaa2d2204114829
--- /dev/null
+++ b/samples/texts/2395852/page_27.md
@@ -0,0 +1,13 @@
also present in the so-called *role value maps* (see [19]). For the same reason, $\mathcal{ALC}_{RA\ominus}$ does not include *inverse roles*. As we show in this paper, it suffices to allow composition on the left-hand side to make the resulting logic undecidable. This is a new and unexpected result. The proof techniques applied in [19] to show the undecidability of role value maps cannot be exploited to show the undecidability of $\mathcal{ALC}_{RA\ominus}$, because the proof given in [19] strongly depends on the presence of role compositions on the right-hand side of implication axioms.

As discussed below in the spatial reasoning example, axioms of the form $S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$ seem to be indispensable components of a description logic framework suitable for qualitative spatial reasoning tasks. The discovered undecidability result is therefore a big obstacle on the way to a full-fledged spatial-reasoning description-logic framework, which would even need more expressiveness than provided by $\mathcal{ALC}_{RA\ominus}$. For example, in order to truly capture the semantics of qualitative spatial relationships like the ones discussed below, *inverse roles* and additional role disjointness declarations would be needed.

In [22], we presented the logic $\mathcal{ALC}_{RA}$.
The only difference between $\mathcal{ALC}_{RA}$ and $\mathcal{ALC}_{RA\ominus}$ is that the former requires that all roles be interpreted as disjoint, i.e., for any two roles $R, S$ with $R \neq S$ and any interpretation $\mathcal{I}$, $R^\mathcal{I} \cap S^\mathcal{I} = \emptyset$ must hold. Even though this seems to be a minor variation of $\mathcal{ALC}_{RA\ominus}$, in fact it is not, because the disjointness requirement for roles has a number of non-obvious and far-reaching consequences (see [22]). The undecidability proof given here does not apply to $\mathcal{ALC}_{RA}$, so the question whether $\mathcal{ALC}_{RA}$ is decidable or not is still open.

The structure of this paper is as follows: first we formally define the syntax and semantics of $\mathcal{ALC}_{RA\ominus}$. Then, the relationships to other known description logics providing some kind of transitive roles are sketched. The usefulness of $\mathcal{ALC}_{RA\ominus}$ in a spatial reasoning scenario is exemplified in the next section. The main contribution of this paper is the undecidability proof in Section 5. Finally, we conclude by discussing whether $\mathcal{ALC}_{RA}$ might be undecidable as well, and future work is outlined. In the search for a decidable description logic with composition-based role axioms of the proposed form, a promising idea is to impose certain syntactic restrictions on the allowed role boxes. These syntactic restrictions have to be worked out in the future.

## 2 Syntax and Semantics of $\mathcal{ALC}_{RA\ominus}$

In the following the set of well-formed concepts of $\mathcal{ALC}_{RA\ominus}$ is specified:

**Definition 1 (Concept Expressions)** Let $\mathcal{N}_C$ be a set of concept names, and let $\mathcal{N}_R$ be a set of role names (roles for short), such that $\mathcal{N}_C \cap \mathcal{N}_R = \emptyset$.
The set
\ No newline at end of file
diff --git a/samples/texts/2395852/page_28.md b/samples/texts/2395852/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c8cd0998845c64952251603ab596bc016cdbd16
--- /dev/null
+++ b/samples/texts/2395852/page_28.md
@@ -0,0 +1,25 @@
of concept expressions (or concepts for short) is the smallest inductively defined set such that

1. Every concept name $C \in \mathcal{N}_C$ is a concept.

2. If $C$ and $D$ are concepts, and $R \in \mathcal{N}_R$ is a role, then the following expressions are concepts as well: $(\neg C)$, $(C \sqcap D)$, $(C \sqcup D)$, $(\exists R.C)$, and $(\forall R.C)$.

3. Nothing else is a concept. ■

The set of concepts is the same as for the language $\mathcal{ALC}$. If a concept starts with "(", we call it a compound concept; otherwise it is a concept name or atomic concept. Parentheses may be omitted for the sake of readability if the concept is still uniquely parsable.

We use the following abbreviations: if $R_1, \dots, R_n$ are roles, and $C$ is a concept, then we define $(\forall R_1 \sqcup \dots \sqcup R_n.C) =_{def} (\forall R_1.C) \sqcap \dots \sqcap (\forall R_n.C)$ and $(\exists R_1 \sqcup \dots \sqcup R_n.C) =_{def} (\exists R_1.C) \sqcup \dots \sqcup (\exists R_n.C)$. Additionally, for some $CN \in \mathcal{N}_C$ we define $\top =_{def} CN \sqcup \neg CN$ and $\bot =_{def} CN \sqcap \neg CN$ (therefore, $\top^\mathcal{I} = \Delta^\mathcal{I}$, $\bot^\mathcal{I} = \emptyset$).

The set of *roles* being used within a concept term $C$ is defined as follows:

**Definition 2 (Used Roles, roles($C$))**

$$ \text{roles}(C) =_{def} \begin{cases} \emptyset & \text{if } C \in \mathcal{N}_C \\ \text{roles}(D) & \text{if } C = (\neg D) \\ \text{roles}(D) \cup \text{roles}(E) & \text{if } C = (D \sqcap E) \text{ or } C = (D \sqcup E) \\ \{R\} \cup \text{roles}(D) & \text{if } C = (\exists R.D) \text{ or } C = (\forall R.D) \end{cases} \qquad \blacksquare $$

For example, $\text{roles}(\forall R.\exists S.C \sqcap \exists T.D) = \{R, S, T\}$.

As already noted, $\mathcal{ALC}_{RA\ominus}$ provides role axioms of the form $S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$. More formally, the syntax of these role axioms is as follows:

**Definition 3 (Role Axioms, Role Box)** If $S, T, R_1, \dots, R_n \in \mathcal{N}_R$, then the expression $S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$, $n \ge 1$, is called a *role axiom*. If $ra = S \circ T \sqsubseteq R_1 \sqcup \dots \sqcup R_n$, then $\text{pre}(ra) =_{def} (S, T)$ and $\text{con}(ra) =_{def} \{R_1, \dots, R_n\}$. A finite set $\mathfrak{R}$ of role axioms is called a *role box*. Let $\text{roles}(ra) =_{def} \{S, T, R_1, \dots, R_n\}$, and $\text{roles}(\mathfrak{R}) =_{def} \bigcup_{ra \in \mathfrak{R}} \text{roles}(ra)$. $\square$

Additionally, a set of global *concept inclusion axioms (GCIs)* can be specified. A set of these GCIs is called a free TBox:
\ No newline at end of file
diff --git a/samples/texts/2395852/page_29.md b/samples/texts/2395852/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..356ac49a4c77d43e465acb2d000f1e4d945ecd37
--- /dev/null
+++ b/samples/texts/2395852/page_29.md
@@ -0,0 +1,33 @@
**Definition 4 (Generalized Concept Inclusion Axiom, TBox)** If *C* and *D* are $\mathcal{ALC}_{RA\ominus}$ concepts, then the expression $C \dot{\sqsubseteq} D$ is called a generalized concept inclusion axiom, or GCI for short. A finite set of such GCIs is called a free TBox, $\mathfrak{T}$. We use $C \doteq D \in \mathfrak{T}$ as a shorthand for $\{C \dot{\sqsubseteq} D, D \dot{\sqsubseteq} C\} \subseteq \mathfrak{T}$.
$\square$

The semantics of an $\mathcal{ALC}_{RA\ominus}$ concept is specified by giving a Tarski-style interpretation $\mathcal{I}$ that has to satisfy the following conditions:

**Definition 5 (Interpretation)** An interpretation $\mathcal{I} =_{def} (\Delta^{\mathcal{I}}, \cdot^{\mathcal{I}})$ consists of a non-empty set $\Delta^{\mathcal{I}}$, called the domain of $\mathcal{I}$, and an interpretation function $\cdot^{\mathcal{I}}$ that maps every concept name to a subset of $\Delta^{\mathcal{I}}$, and every role name to a subset of $\Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}}$.

The interpretation function $\cdot^{\mathcal{I}}$ can then be extended to arbitrary concepts $C$ by using the following definitions (we write $X^{\mathcal{I}}$ instead of $\cdot^{\mathcal{I}}(X)$):

$$
\begin{align*}
(\neg C)^{\mathcal{I}} &=_{def} \Delta^{\mathcal{I}} \setminus C^{\mathcal{I}} \\
(C \sqcap D)^{\mathcal{I}} &=_{def} C^{\mathcal{I}} \cap D^{\mathcal{I}} \\
(C \sqcup D)^{\mathcal{I}} &=_{def} C^{\mathcal{I}} \cup D^{\mathcal{I}} \\
(\exists R.C)^{\mathcal{I}} &=_{def} \{i \in \Delta^{\mathcal{I}} \mid \exists j \in C^{\mathcal{I}} : \langle i, j \rangle \in R^{\mathcal{I}}\} \\
(\forall R.C)^{\mathcal{I}} &=_{def} \{i \in \Delta^{\mathcal{I}} \mid \forall j : \langle i, j \rangle \in R^{\mathcal{I}} \Rightarrow j \in C^{\mathcal{I}}\} \quad \square
\end{align*}
$$

It is therefore sufficient to provide the interpretations of the concept names and the roles, since the interpretation of every concept is then uniquely determined by these definitions.

In the following we specify under which conditions a given interpretation is a model of a syntactic entity (we also say that an interpretation satisfies a syntactic entity):

**Definition 6 (Model Relationship)** An interpretation $\mathcal{I}$ satisfies / is a model of a concept $C$, written $\mathcal{I} \models C$, iff $C^{\mathcal{I}} \neq \emptyset$.
An interpretation $\mathcal{I}$ satisfies / is a model of a role axiom $S \circ T \sqsubseteq R_1 \sqcup \cdots \sqcup R_n$, written $\mathcal{I} \models S \circ T \sqsubseteq R_1 \sqcup \cdots \sqcup R_n$, iff $S^{\mathcal{I}} \circ T^{\mathcal{I}} \subseteq R_1^{\mathcal{I}} \cup \cdots \cup R_n^{\mathcal{I}}$.

An interpretation $\mathcal{I}$ satisfies / is a model of a role box $\mathfrak{R}$, written $\mathcal{I} \models \mathfrak{R}$, iff for all role axioms $ra \in \mathfrak{R}$: $\mathcal{I} \models ra$.

An interpretation $\mathcal{I}$ satisfies / is a model of a GCI $C \dot{\sqsubseteq} D$, written $\mathcal{I} \models C \dot{\sqsubseteq} D$, iff $C^{\mathcal{I}} \subseteq D^{\mathcal{I}}$.

An interpretation $\mathcal{I}$ satisfies / is a model of a TBox $\mathfrak{T}$, written $\mathcal{I} \models \mathfrak{T}$, iff for all GCIs $g \in \mathfrak{T}$: $\mathcal{I} \models g$.

An interpretation $\mathcal{I}$ satisfies / is a model of $(C, \mathfrak{R})$, written $\mathcal{I} \models (C, \mathfrak{R})$, iff $\mathcal{I} \models C$ and $\mathcal{I} \models \mathfrak{R}$.
\ No newline at end of file
diff --git a/samples/texts/2395852/page_4.md b/samples/texts/2395852/page_4.md
new file mode 100644
index 0000000000000000000000000000000000000000..8bf0678700d4548445cdd9690c820d0c52e4b921
--- /dev/null
+++ b/samples/texts/2395852/page_4.md
@@ -0,0 +1,19 @@
$(C', \mathfrak{R})$ is satisfiable in $\mathcal{ALC}_{RA\ominus}$ iff the original concept $C$ is.$^2$

$C'$ is constructed from $C$ as follows: the role $\oplus(R)$ in $C$ is replaced by the role $R_{\oplus}$. Then, for every role $R_{\oplus}$, we add the role axioms $\{R \circ R \sqsubseteq R_{\oplus}, R_{\oplus} \circ R \sqsubseteq R_{\oplus}\}$ to $\mathfrak{R}$.
Please note that this only ensures $(\oplus(R))^\mathcal{I} = R^\mathcal{I} \cup R_{\oplus}^\mathcal{I}$, and not $(\oplus(R))^\mathcal{I} = R_{\oplus}^\mathcal{I}$, since $R^\mathcal{I} \subseteq R_{\oplus}^\mathcal{I}$ is not enforced. Therefore, in order to get an equi-satisfiable concept $C'$, we have to rewrite the original concept $C$ in the following way:

$$ \exists \oplus (R).D \rightarrow \exists R_{\oplus}.D $$

$$ \exists R.D \rightarrow \exists R_{\oplus}.D \sqcap \exists R.D $$

$$ \forall \oplus (R).D \rightarrow \forall R_{\oplus}.D \sqcap \forall R.D $$

Now, $C'$ is satisfiable w.r.t. the role box $\mathfrak{R}$ iff $C$ is satisfiable.

**$\mathcal{ALCH}_{R+}$**: The description logic $\mathcal{ALCH}_{R+}$ (see [13, 14]) extends $\mathcal{ALC}_{R+}$ by an additional set of role inclusion axioms of the form $R \sqsubseteq S$, enforcing $R^\mathcal{I} \subseteq S^\mathcal{I}$ on the models $\mathcal{I}$. Adding the identity role $Id$ with the fixed semantics of the identity relationship $Id^\mathcal{I} =_{def} \{\langle x, x \rangle \mid x \in \Delta^\mathcal{I} \}$ to $\mathcal{ALC}_{RA\ominus}$ would obviously enable the simulation of these role inclusion axioms: for each role inclusion axiom $R \sqsubseteq S$, add the role axiom $R \circ Id \sqsubseteq S$ to a role box $\mathfrak{R}$ and consider concept satisfiability w.r.t. $\mathfrak{R}$. Currently, neither $\mathcal{ALC}_{RA\ominus}$ nor $\mathcal{ALC}_{RA}$ provides the identity role.

**Other Fragments of FOPL:** In the following we briefly discuss whether decidability or undecidability of $\mathcal{ALC}_{RA\ominus}$ follows from already known results in logic, namely from results on FOPL with a bounded number of variables, or from research carried out on the so-called (loosely) guarded fragment of FOPL. To the best of our knowledge, no previously known decidability or undecidability result is exploitable in the case of $\mathcal{ALC}_{RA\ominus}$.
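The effect of the two role axioms $R \circ R \sqsubseteq R_{\oplus}$ and $R_{\oplus} \circ R \sqsubseteq R_{\oplus}$ discussed above can be made concrete by computing, on a small finite extension of $R$, the least relation closed under them. The following sketch is ours (the helper names and the example relation are illustrative, not from the paper); it shows that the least such $R_{\oplus}$ consists of the $R$-chains of length at least two, so the inclusion $R^{\mathcal{I}} \subseteq R_{\oplus}^{\mathcal{I}}$ is indeed not forced:

```python
def compose(S, T):
    """Relational composition: S o T = {(x, z) | there is y with (x, y) in S and (y, z) in T}."""
    return {(x, z) for (x, y) in S for (y2, z) in T if y == y2}

def least_r_plus(R):
    """Least relation closed under  R o R <= R_plus  and  R_plus o R <= R_plus.

    The fixpoint contains exactly the R-chains of length >= 2; note that R itself
    is not required to be a subset of the result, which is the point made above.
    """
    r_plus = compose(R, R)
    while True:
        new = r_plus | compose(r_plus, R)
        if new == r_plus:          # fixpoint reached (the domain is finite)
            return r_plus
        r_plus = new

R = {(1, 2), (2, 3), (3, 4)}
print(least_r_plus(R))  # chains of length 2 and 3: (1, 3), (2, 4), (1, 4) -- but not (1, 2)
```

This is why the concept rewriting above has to re-introduce the one-step cases explicitly.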
It is well-known that certain fragments of FOPL are decidable, for example, the class of all closed FOPL formulas containing at most two variables, denoted by $FO^2$. $FO^2$ has the finite model property – each satisfiable formula has a finite model. We already noted that one would need at least three variables to translate $\mathcal{ALC}_{RA\ominus}$ role boxes and concepts into FOPL. In fact, there is no way to express even the transitivity axiom $\forall x, y, z : R(x, y) \land R(y, z) \Rightarrow R(x, z)$ in $FO^2$ (see [10]). If $FO^2$ is augmented by transitivity on an extra-logical level (since transitivity cannot be expressed within the language itself), $FO^2$ becomes undecidable, as Grädel et al. have shown (see [10]). However, the class $FO^2$ is much too large to capture the concept side of $\mathcal{ALC}_{RA\ominus}$, since $\mathcal{ALC}$ concepts are expressible in a proper subset of $FO^2$, namely $GF_\beta^2$, see below. Recall that $\mathcal{ALC}_{R+}$ is decidable.

$^2$It follows that if $\mathcal{ALC}_{RA\ominus}$ were decidable it would be EXPTIME-hard, since $\mathcal{ALC}_\oplus$ is EXPTIME-complete.
\ No newline at end of file
diff --git a/samples/texts/2395852/page_7.md b/samples/texts/2395852/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f6bf05571614abe3168dd0fef2c2f2d904df372
--- /dev/null
+++ b/samples/texts/2395852/page_7.md
@@ -0,0 +1,19 @@
Figure 4: Illustration of $\forall a, b, c : EC(a, b) \land EC(b, c) \Rightarrow (DC(a, c) \lor EC(a, c) \lor PO(a, c) \lor TPP(a, c) \lor TPPI(a, c))$

means of a so-called *composition table* that lists, given the "column" relationship $R(a, b)$ and the "row" relationship $S(b, c)$, all possible relationships $T_1(a, c)$, $T_2(a, c), \dots, T_n(a, c)$ that may hold between $a$ and $c$.
For example, in the case of RCC8, the composition table contains the entry $\{DC, EC, PO, TPP, TPPI\}$ when the relationship *EC* is given for the row as well as for the column; see Figure 4. This corresponds to the FOPL axiom $\forall a, b, c : EC(a, b) \land EC(b, c) \Rightarrow (DC(a, c) \lor EC(a, c) \lor PO(a, c) \lor TPP(a, c) \lor TPPI(a, c))$, which is equivalent to the role axiom $EC \circ EC \sqsubseteq DC \sqcup EC \sqcup PO \sqcup TPP \sqcup TPPI$.

Usually, the *disjointness* of the base relations must also be captured. As already noted, $\mathcal{ALC}_{RA\ominus}$ lacks this expressiveness (and it cannot easily be simulated by means of other constructs, see below), but $\mathcal{ALC}_{RA}$ does not. For an adequate modeling of spatial relationships, *inverse roles* must also be taken into account. For example, the RCC8 relationship *TPPI* is the inverse of *TPP*, and *NTPPI* is the inverse of *NTPP*. Of course, $TPP^{\mathcal{I}} = (TPPI^{\mathcal{I}})^{-1}$ and $NTPP^{\mathcal{I}} = (NTPPI^{\mathcal{I}})^{-1}$ should be ensured. However, both $\mathcal{ALC}_{RA}$ and $\mathcal{ALC}_{RA\ominus}$ lack inverse roles, since undecidability would then follow immediately from previously known undecidability results. Since we use $\mathcal{ALC}_{RA\ominus}$ in the following example, we can rely neither on $TPP^{\mathcal{I}} = (TPPI^{\mathcal{I}})^{-1}$ nor on the disjointness of roles.

The ability to approximate composition tables, which are very widely used in the field of relation algebra-based knowledge representation and reasoning, is the distinguishing feature of $\mathcal{ALC}_{RA\ominus}$ and $\mathcal{ALC}_{RA}$. Usually, the $\mathcal{ALC}_{RA}$ approximation will be better, since the disjointness of the base relations is also enforced. Nevertheless, as the example demonstrates, we can still solve some interesting spatial reasoning tasks using $\mathcal{ALC}_{RA\ominus}$.
Consider the following TBox:

$$
\begin{array}{lcl}
\textit{circle} & \dot{\sqsubseteq} & \textit{figure} \\
\textit{figure touching a figure} & \doteq & \textit{figure} \sqcap \exists \textit{EC}.\textit{figure} \\
\textit{special figure} & \doteq & \textit{figure} \sqcap \\
& & \forall \textit{PO}.\neg\textit{figure} \sqcap \\
& & \forall \textit{NTPPI}.\neg\textit{figure} \sqcap \\
& & \forall \textit{TPPI}.\neg\textit{circle} \sqcap \\
& & \exists \textit{TPPI}.(\textit{figure} \sqcap \exists \textit{EC}.\textit{circle})
\end{array}
$$
\ No newline at end of file
diff --git a/samples/texts/2395852/page_9.md b/samples/texts/2395852/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..d369ba49cd0d45ae4c4828eb9682b7f8ad71ef76
--- /dev/null
+++ b/samples/texts/2395852/page_9.md
@@ -0,0 +1,11 @@
stricted $\mathcal{ALCRP}(D)$. The strong syntactic requirements make modeling with $\mathcal{ALCRP}(D)$ much more complicated, and many interesting spatial-reasoning tasks cannot be addressed within the decidable fragment. On the other hand, the special $\mathcal{ALCRP}(D)$ instantiation $\mathcal{ALCRP}(S_2)$ captures the semantics of the RCC8 spatial relationships much more appropriately than would be possible with $\mathcal{ALC}_{RA\ominus}$, since inverse roles are present and disjointness is ensured as well. For example, see [12] for an $\mathcal{ALCRP}(S_2)$ spatial reasoning application. However, $\mathcal{ALCRP}(S_2)$ suffers from the same strong syntactic restrictions, which make it nearly impossible to address more complex spatial reasoning tasks. One of the motivations for our work on $\mathcal{ALC}_{RA}$ and $\mathcal{ALC}_{RA\ominus}$ was to create a logic that might be used more freely than $\mathcal{ALCRP}(S_2)$ for spatial modeling and reasoning.
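The composition-table encoding discussed above can also be exercised mechanically: each table cell becomes one role axiom, and a finite spatial interpretation can be checked against the resulting semantic condition. A Python sketch of ours (the one-cell table fragment, the helper names, and the region names are illustrative assumptions; symmetry of *EC* is not closed off automatically):

```python
# Fragment of the RCC8 composition table: cell (S, T) -> possible relations for (a, c).
# Each cell is read as the role axiom  S o T <= R1 u ... u Rn.
COMPOSITION = {
    ("EC", "EC"): {"DC", "EC", "PO", "TPP", "TPPI"},
}

def violations(ext, table):
    """List the composition-table violations of a finite interpretation.

    ext maps role names to sets of pairs; a violation (S, T, x, z) means that
    S(x, y) and T(y, z) hold for some y, but no allowed relation holds for (x, z).
    """
    bad = []
    for (S, T), allowed_roles in table.items():
        allowed = set().union(*(ext.get(R, set()) for R in allowed_roles))
        for (x, y) in ext.get(S, set()):
            for (y2, z) in ext.get(T, set()):
                if y == y2 and (x, z) not in allowed:
                    bad.append((S, T, x, z))
    return bad

# EC(a, b), EC(b, c) with PO(a, c): consistent with the EC o EC entry.
ok_ext = {"EC": {("a", "b"), ("b", "c")}, "PO": {("a", "c")}}
# EC(a, b), EC(b, c) with NTPP(a, c): NTPP is not among the allowed relations.
bad_ext = {"EC": {("a", "b"), ("b", "c")}, "NTPP": {("a", "c")}}
print(violations(ok_ext, COMPOSITION))   # no violations
print(violations(bad_ext, COMPOSITION))  # reports the EC o EC violation for (a, c)
```

A tableau procedure for $\mathcal{ALC}_{RA\ominus}$ has to enforce exactly this condition on every model it constructs.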
# 5 Proving Undecidability of $\mathcal{ALC}_{RA\ominus}$

The structure of the proof is as follows: first we show that the intersection problem for a special class of context-free grammars – so-called concatenation grammars – is undecidable. Then we show that the intersection problem for concatenation grammars could be solved iff the satisfiability problem of $\mathcal{ALC}_{RA\ominus}$ were decidable. This obviously shows that the latter must be undecidable as well, since the former is. It should be noted that the underlying idea of the proof given below is nearly identical to the idea exploited in the proof given by Ganzinger et al. in [7] for showing the undecidability of $LGF_-$ with one transitive relation. However, our proof was found independently, and for different classes of languages ($\mathcal{ALC}_{RA\ominus}$ is not in $LGF$). We start with some basic definitions needed for the proofs:

**Definition 9 (Context-Free Grammar, Language)** A context-free grammar $G$ is a quadruple $(V, \Sigma, P, S)$, where $V$ is a finite set of variables or non-terminal symbols, $\Sigma$ is a finite alphabet of terminal symbols with $V \cap \Sigma = \emptyset$, and $P \subseteq V \times (V \cup \Sigma)^+$ is a set of productions or grammar rules. $S \in V$ is the start variable. The language generated by a context-free grammar $G$ is defined as $\mathcal{L}(G) = \{w \mid w \in \Sigma^*, S \xrightarrow{*} w\}$ (see [21]). In the following, we will only consider languages with $\epsilon \notin \mathcal{L}(G)$ (therefore we write $\mathcal{L}(G) = \{w \mid w \in \Sigma^+, S \xrightarrow{+} w\}$). $\square$

**Definition 10 (Intersection Problem for Languages)** Let $\mathcal{L}_1$ and $\mathcal{L}_2$ be formal languages (e.g. context-free languages). The intersection problem is to decide whether $\mathcal{L}_1 \cap \mathcal{L}_2$ is empty or not.
$\square$ + +For lack of a better name we will consider special context-free grammars that we call *concatenation grammars* (for reasons that will become clear later): \ No newline at end of file diff --git a/samples/texts/3220451/page_1.md b/samples/texts/3220451/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..a62efdf5cd26210fda593ea11159c308940df03f --- /dev/null +++ b/samples/texts/3220451/page_1.md @@ -0,0 +1,30 @@ +Intermittent Fault Detection for Nonlinear +Stochastic Systems + +Yichun Niu*, Li Sheng*, Ming Gao*, Donghua Zhou** + +* College of Control Science and Engineering, China University of Petroleum (East China), Qingdao, 266580, China. Corresponding author: Li Sheng. (email: shengli@upc.edu.cn). + +** College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China. + +**Abstract:** In this paper, the problem of intermittent fault detection is investigated for nonlinear stochastic systems. The moving horizon estimation with dynamic weight matrices is proposed, where the weight matrices are adjusted by an unreliability index of prior estimate to avoid the smearing effects of intermittent faults. Based on the particle swarm optimization algorithm, the nonlinear optimization problem is solved and the approximate estimate is derived. Finally, the feasibility and effectiveness of the proposed algorithm are validated by a numerical example. + +**Keywords:** Intermittent fault detection, Nonlinear stochastic systems, Moving horizon estimation, Dynamic weight matrices, Particle swarm optimization. + +# 1. 
INTRODUCTION

To strengthen the reliability and safety of industrial processes, tremendous effort has been devoted over the past several decades to the study of fault diagnosis techniques, and a large number of research results have been effectively applied in various fields, such as chemical processes, aerospace systems, power systems and so on; see Fazai et al. (2019); Mandal et al. (2019); Shen et al. (2019). Nevertheless, it should be pointed out that most existing literature has concentrated on permanent faults, while little attention has been paid to another common kind of fault, intermittent faults (IFs). Different from a permanent fault, an IF usually recurs for the same reason and lasts for a limited period of time. Since the appearing and disappearing times of IFs are nondeterministic, the system can recover without fault-tolerant operations (Rashid et al. (2015)). Nonetheless, if IFs are not treated properly and promptly, their destructiveness may grow over time and finally lead to major accidents (Correcher et al. (2012)). In fact, in power systems, mechanical equipment, electrical industries and many other engineering applications with electronics, the occurrence frequency of IFs is much higher than that of permanent faults. Therefore, there is an urgent need to develop fault diagnosis methods for IFs.

Generally speaking, the objective of fault diagnosis consists of fault detection, isolation and estimation, which respectively study the time, location and size of faults. It should be noted that IF detection is more difficult than permanent fault detection, since its aim is to detect all appearing and disappearing times of IFs. Especially for the detection of disappearing times, the residual is affected by previous IFs and then remains above the threshold for an uncertain period of time, which is the so-called smearing effects of IFs.
Up to now, there have been some research results on IF detection based on qualitative or quantitative analysis methods; see Constantinescu (2008); Correcher et al. (2012); Kim (2009); Yan et al. (2018, 2016). For example, in Yan et al. (2018) and Yan et al. (2016), the intermittent actuator and sensor fault detection problems for linear stochastic systems have been investigated, respectively.

On the other hand, it is well known that nonlinearity pervasively exists in almost all dynamic systems. To solve the fault detection problem for nonlinear systems, fruitful methods have been proposed by a variety of communities. These methods include, but are not limited to, the extended Kalman filter (EKF) method (Wang et al. (2019)), the particle filter (PF) method (Daroogheh et al. (2018); Yin and Zhu (2015)), and the strong tracking filter (STF) method (Qin et al. (2016)). However, a thorough literature search reveals that, for IFs in nonlinear systems, corresponding fault detection results are still lacking.

To fill this gap in the existing literature, this paper studies IF detection for nonlinear systems with stochastic noises. The main contributions are listed as follows.
1. *This paper represents one of the first attempts to investigate the IF detection problem for nonlinear systems.*
2. *By means of the moving horizon estimation with dynamic weight matrices (MHEDWM), the smearing effects of IFs are properly suppressed.*

The rest of this paper is organized as follows. Section 2 gives the problem description of IF detection for nonlinear systems and analyzes the deficiencies of existing methods for detecting IFs. Section 3 proposes the MHED-

* This work is supported by National Natural Science Foundation of China (Nos. 61773400, 61751307), Key Research and Development Program of Shandong Province (No. 2019GGX101046), Fundamental Research Funds for the Central Universities of China (No.
19CX02044A), and Research Fund for the Taishan Scholar Project of Shandong Province of China.
\ No newline at end of file
diff --git a/samples/texts/3220451/page_3.md b/samples/texts/3220451/page_3.md
new file mode 100644
index 0000000000000000000000000000000000000000..3384847223233680bcab85086d1164cd6a13b1f9
--- /dev/null
+++ b/samples/texts/3220451/page_3.md
@@ -0,0 +1,128 @@
Then it can be found that there exist the following two main difficulties in the IF detection of nonlinear systems.

(1) In traditional fault detection methods for nonlinear systems, such as EKF, PF, STF and so on, it can only be ensured that the designed residual is larger than the detection threshold after the fault appears. However, when the fault disappears, owing to the smearing effects, it is hard to guarantee that the residual is smaller than the threshold; see Fig. 1.

(2) The model linearization method cannot be applied to the IF detection of nonlinear systems, due to the fact that the omitted higher-order terms of the Taylor expansion may be larger than the retained lower-order terms after the fault occurs. Thus, the existing IF detection methods for linear systems cannot readily be extended to nonlinear systems via model linearization. Similarly, EKF, STF and other methods containing Taylor expansion approximations will also meet such a problem. By employing the imprecise approximate model, the estimation error may tend to diverge; see Fig. 2.

Based on the above analysis, it can be seen that IF detection for nonlinear systems is quite a challenging problem, which cannot be properly solved by the existing methods. Hence, the main objective of this paper is to design a new algorithm to deal with such a problem.

3.
IF DETECTION ALGORITHM

In this section, the MHEDWM algorithm is designed, where for each time $k \ge N$ ($N < \min\{\bar{d}_1 - 1, \bar{d}_2 - 1\}$), the system state $x(k - N)$ is estimated depending on the past measurement outputs $\{y(k-i)\}_{0\le i \le N}$. To facilitate understanding, we define $\hat{x}(k-N)$ and $\bar{x}(k-N|k) = g(\hat{x}(k-1-N))$ as the posterior estimate and the prior estimate of $x(k-N)$, respectively. Construct the following quadratic cost function (QCF)

$$
\begin{equation}
\begin{aligned}
\mathcal{J}(k, \hat{x}(k-N|k)) = & \| \hat{x}(k-N|k) - \bar{x}(k-N|k) \|_{P(k)}^2 \\
& + \sum_{i=0}^{N} \| y(k-i) - \hat{y}(k-i|k) \|_{Q(k)}^2,
\end{aligned}
\tag{3}
\end{equation}
$$

where $P(k) \ge 0$ and $Q(k) \ge 0$ are weight matrices to be designed, $\hat{y}(k-i|k) = h(\hat{x}(k-i|k))$ ($0 \le i \le N$) and $\hat{x}(k-i+1|k) = g(\hat{x}(k-i|k))$ ($1 \le i \le N$). Therefore, the desired estimate $\hat{x}(k-N)$ can be derived by solving the following optimization problem (OP)

$$
\hat{x}(k-N) = \arg \min_{\hat{x}(k-N|k)} \mathcal{J}(k, \hat{x}(k-N|k)). \quad (4)
$$

In this paper, an unreliability index of the prior estimate $\bar{x}(k-N|k)$ is designed as follows

$$
\rho(k) = \| \sigma(k) \|^{2}, \qquad (5)
$$

where

$$
\begin{align*}
\sigma(k) &= Y(k) - \bar{Y}(k|k), \\
Y(k) &= [y^T(k-N), \dots, y^T(k)]^T, \\
\bar{Y}(k|k) &= [\bar{y}^T(k-N|k), \dots, \bar{y}^T(k|k)]^T, \\
\bar{y}(k-i|k) &= h(\bar{x}(k-i|k)), \quad 0 \le i \le N, \\
\bar{x}(k-i+1|k) &= g(\bar{x}(k-i|k)), \quad 1 \le i \le N.
\end{align*}
$$

In order to avoid the smearing effects of IFs, the prior estimate $\bar{x}(k - N|k)$ should be properly discarded during the estimation process, which can be achieved by regulating the weight matrices $P(k)$ and $Q(k)$.
Then the following rules are established:

1) If $\rho(k) \le \underline{\rho}$, let $P(k) = I$ and $Q(k) = 0$;

2) If $\rho(k) \ge \bar{\rho}$, let $P(k) = 0$ and $Q(k) = I$;

3) If $\underline{\rho} < \rho(k) < \bar{\rho}$, let $P(k) = \beta(k)I$ and $Q(k) = (1 - \beta(k))I$, where $\beta(k) = (\bar{\rho} - \rho(k))/(\bar{\rho} - \underline{\rho})$,

where $\bar{\rho} > \underline{\rho} \ge 0$ are given scalars related to the stochastic noises.

Defining $g^{(i)}(x) = g(g^{(i-1)}(x))$ $(i \in \mathbb{N}^{+})$ and $g^{(0)}(x) = x$, one has

$$
\hat{x}(k - N + i|k) = g^{(i)}(\hat{x}(k - N|k)). \quad (6)
$$

Then the QCF (3) can be rewritten in the following form

$$
\begin{equation}
\begin{split}
\mathcal{J}(k, \hat{x}(k-N|k)) = {}& \| \hat{x}(k-N|k) - \bar{x}(k-N|k) \|_{P(k)}^2 \\
& + \| Y(k) - \hat{Y}(k|k) \|_{\tilde{Q}(k)}^2,
\end{split}
\tag{7}
\end{equation}
$$

where

$$
\tilde{Q}(k) = \operatorname{diag}\left\{Q(k), \cdots, Q(k)\right\}_{N+1},
$$

$$
\begin{align*}
\hat{Y}(k|k) &= [\hat{y}^T(k-N|k), \dots, \hat{y}^T(k|k)]^T, \\
\hat{y}(k-N+i|k) &= h(g^{(i)}(\hat{x}(k-N|k))), \quad i=0, \dots, N.
\end{align*}
$$

It can be found that for time instant $k > N$, the QCF $\mathcal{J}(k, \hat{x}(k-N|k))$ is a nonlinear function of $\hat{x}(k-N|k)$, which is related to the functions $g(\cdot)$ and $h(\cdot)$. For general nonlinear functions $g(\cdot)$ and $h(\cdot)$, it is hardly possible to give a precise analytical solution of the nonlinear OP (4). Hence, in this paper, the particle swarm optimization (PSO) algorithm is introduced to search for an approximate solution $\hat{x}^\circ(k-N)$ of OP (4).
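The PSO search over the cost (7) can be sketched as follows. This is a minimal textbook global-best PSO, not the paper's implementation; the toy state map $g$, output map $h$, horizon, prior, and all tuning constants below are illustrative assumptions:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=40, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Global-best particle swarm search for an approximate minimizer of `cost`."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia / acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Toy instance of the QCF (7): prior term with P(k) = I plus output terms with Q(k) = I.
x_prior = np.array([0.5])
y_meas = [1.0, 1.0, 1.0]                          # measurements over the horizon
g = lambda x: 0.9 * x                             # hypothetical state map
h = lambda x: x[0]                                # hypothetical output map
def qcf(x0):
    x = np.asarray(x0)
    cost = float(np.sum((x - x_prior) ** 2))      # prior-mismatch term
    for y in y_meas:                              # propagate and penalize output mismatch
        cost += (y - h(x)) ** 2
        x = g(x)
    return cost

x_hat = pso_minimize(qcf, dim=1)                  # approximate solution of the toy OP
```

Since every particle evaluation is a full propagation of the horizon, the per-step cost grows with $N$, which motivates the "reduction of the calculation load" listed as future work in Section 5.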
Defining the residual

$$
r(k) = \hat{x}^{\circ}(k - N) - g(\hat{x}^{\circ}(k - 1 - N)), \quad (8)
$$

the evaluation function $J(k)$ and the threshold $J_{\text{th}}$ can be given as follows

$$
J(k) = \sum_{l=0}^{L-1} \|r(k-l)\|^2, \qquad (9)
$$

$$
J_{\text{th}} = \sup_{f(k)=0} J(k), \qquad (10)
$$

where $L$ is a given positive integer satisfying $N + L < \max\{\bar{d}_1 - 1, \bar{d}_2 - 1\}$. The IF $f(k)$ can be detected by the following test

$$
\begin{cases}
J(k) \geq J_{\text{th}} \\
J(k) < J_{\text{th}}
\end{cases}
\Rightarrow
\begin{array}{l}
\text{faults occur} \\
\text{no faults.}
\end{array}
$$

**Remark 2.** As is well known, the prior estimate derived from the previous estimates plays an important role in traditional estimation methods, such as the Kalman filter, the PF, and the Luenberger observer. When the fault disappears, the a posteriori estimate is still affected by IFs existing in \ No newline at end of file diff --git a/samples/texts/3220451/page_5.md b/samples/texts/3220451/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..773d539055391025bca907e857541cdad58b0dd9 --- /dev/null +++ b/samples/texts/3220451/page_5.md @@ -0,0 +1,29 @@
Fig. 6. Alarm times of IF in the case of $f_a = 1$

Fig. 7. IF and residuals in the case of $f_a = 2$

Fig. 8. Alarm times of IF in the case of $f_a = 2$

## 5. CONCLUSIONS AND PERSPECTIVES

In this paper, the IF detection problem for nonlinear stochastic systems has been investigated based on the moving horizon estimation (MHE) algorithm. By introducing the unreliability index of the prior estimate, the weight matrices in MHE have been dynamically adjusted, which can avoid the smearing effects of IFs. The simulation has shown that the proposed MHEDWM can guarantee the accuracy of the estimator and, in the meantime, detect all appearing and disappearing times of IFs.
+ +Further research topics include 1) the convergence analysis for the estimation error of MHEDWM; 2) the reduction of the calculation load for nonlinear OP; 3) the simplification of QCF. + +## REFERENCES + +- Constantinescu, C. (2008). Intermittent faults and effects on reliability of integrated circuits. In *2008 Annual Reliability and Maintainability Symposium*, 370–374. +- Correcher, A., Garcia, E., Morant, F., Quiles, E., and Rodriguez, L. (2012). Intermittent failure dynamics characterization. *IEEE Transactions on Reliability*, 61(3), 649–658. +- Daroogheh, N., Meskin, N., and Khorasani, K. (2018). A dual particle filter-based fault diagnosis scheme for nonlinear systems. *IEEE Transactions on Control Systems Technology*, 26(4), 1317–1334. +- Fazai, R., Mansouri, M., Abodayeh, K., Nounou, H., and Nounou, M. (2019). Online reduced kernel pls combined with glrt for fault detection in chemical systems. *Process Safety and Environmental Protection*, 128, 228–243. +- Kim, C.J. (2009). Electromagnetic radiation behavior of low-voltage arcing fault. *IEEE Transactions on Power Delivery*, 24(1), 416–423. +- Mandal, S., Santhi, B., Sridhar, S., Vinolia, K., and Swaminathan, P. (2019). Sensor fault detection in nuclear power plants using symbolic dynamic filter. *Annals of Nuclear Energy*, 134, 390–400. +- Qin, X., Fang, H., and Liu, X. (2016). Strong tracking filter-based fault diagnosis of networked control system with multiple packet dropouts and parameter perturbations. *Circuits Systems and Signal Processing*, 35(7), 2331–2350. +- Rashid, L., Pattabiraman, K., and Gopalakrishnan, S. (2015). Characterizing the impact of intermittent hardware faults on programs. *IEEE Transactions on Reliability*, 64(1), 297–310. +- Shen, Q., Yue, C., Goh, C.H., and Wang, D. (2019). Active fault-tolerant control system design for spacecraft attitude maneuvers with actuator saturation and faults. *IEEE Transactions on Industrial Electronics*, 66(5), 3763–3772. 
+- Wang, T., Liu, L., Zhang, J., Schaeffer, E., and Wang, Y. (2019). A M-EKF fault detection strategy of insulation system for marine current turbine. *Mechanical Systems and Signal Processing*, 115, 269–280. +- Yan, R., He, X., Wang, Z., and Zhou, D.H. (2018). Detection, isolation and diagnosability analysis of intermittent faults in stochastic systems. *International Journal of Control*, 91(2), 480–494. +- Yan, R., He, X., and Zhou, D. (2016). Detecting intermittent sensor faults for linear stochastic systems subject to unknown disturbance. *Journal of the Franklin Institute*, 353(17), 4734–4753. +- Yin, S. and Zhu, X. (2015). Intelligent particle filter and its application to fault detection of nonlinear system. *IEEE Transactions on Industrial Electronics*, 62(6), 3852–3861. \ No newline at end of file diff --git a/samples/texts/3314066/page_1.md b/samples/texts/3314066/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..84f692e5bdb7ce82c179e203d485320d83fb8f6d --- /dev/null +++ b/samples/texts/3314066/page_1.md @@ -0,0 +1,39 @@ +# Half-Gain Tuning for Active Disturbance Rejection Control + +Gernot Herbst* Arne-Jens Hempel** Thomas Göhrt** +Stefan Streif** + +* Siemens AG, Clemens-Winkler-Str. 3, 09116 Chemnitz, Germany +** Technische Universität Chemnitz, 09107 Chemnitz, Germany + +**Abstract:** A new tuning rule is introduced for linear active disturbance rejection control (ADRC), which results in similar closed-loop dynamics as the commonly employed bandwidth parameterization design, but with lower feedback gains. In this manner the noise sensitivity of the controller is reduced, paving the way for using ADRC in more noise-affected applications. It is proved that the proposed tuning gains, while rooted in the analytical solution of an algebraic Riccati equation, can always be obtained from a bandwidth parameterization design by simply halving the gains. This establishes a link between optimal control and pole placement design. 
+ +**Keywords:** Disturbance rejection (linear case); Lyapunov methods; Observers for linear systems; Time-invariant systems. + +## 1. INTRODUCTION + +ADRC was developed as a nonlinear general-purpose controller by Han (2009). A linear variant was proposed by Gao (2003), facilitating stability analysis and significantly reducing the number of tuning parameters with the introduction of the “bandwidth parameterization” approach. + +The seemingly unorthodox use of elements from modern control theory for creating an almost model-free controller from the user’s point of view is key to its appraisal as a “paradigm change”, cf. Gao et al. (2001); Gao (2006), and a differentiator to other model-free approaches, e. g. as introduced by Fliess and Join (2013). And indeed, the ease of tuning, its performance compared to traditional (PID-type) control, and its extendability with features desirable for industrial applications as in Herbst (2016) and Madoński et al. (2019), make it an attractive alternative for real-world control problems, cf. Zheng and Gao (2018). + +The core of ADRC is formed by an observer, which is denoted as “extended state observer” (ESO) and puts emphasis on rejecting disturbances in a broader sense. There are further possible extensions to the observer, such as tracking disturbance derivatives using Generalized Proportional Integral (GPI) observers, cf. Sira-Ramírez et al. (2017). However, we will focus on the (arguably) most common variant of the ESO, which incorporates a single lumped (“total”) disturbance state modeling both unknown dynamics and piecewise constant input disturbance signals of the plant. + +In this paper, we will explore the use of the so-called $\alpha$-controller design for tuning the observer and control loop within linear ADRC. 
It was put forward by Buhl and Lohmann (2009) and is based on the solution of an algebraic Riccati equation to obtain feedback gains leading to an exponentially decaying Lyapunov function for the controlled system. A similar approach, denoted as “low gain feedback”, has been proposed by Zhou et al. (2008). As a matter of fact, applying $\alpha$-controller design to ADRC will lead to reduced controller/observer gains compared to the established bandwidth parameterization approach, which will in turn reduce the noise sensitivity of the resulting design.

The main contribution of this work is the introduction of a new ADRC tuning rule, which we will denote as “half-gain tuning”. We will show that $\alpha$-controller design aiming at closed-loop dynamics similar to bandwidth parameterization will always lead to exactly halved gains for the controller and/or observer. Therefore, while grounded in an algebraic Riccati equation, an $\alpha$-controller design for ADRC can be trivially obtained from bandwidth parameterization, obviating the need to solve the former. For an example, detailed insights are given into the frequency- and time-domain behavior when using ADRC with half-gain tuning for the controller and/or observer.

## 2. ACTIVE DISTURBANCE REJECTION CONTROL

This section provides a very brief overview of continuous-time linear ADRC and the prevalent tuning method. For a more detailed introduction we refer to Herbst (2013).
### 2.1 Idea and Structure of the Controller

The essence of linear ADRC can be described as follows:

(1) assume an $n$-th order integrator chain behavior $P(s) = b_0/s^n$ for a single-input single-output (SISO) plant of order $n$, regardless of its actual structure;

(2) apply the inverted gain $1/b_0$ at the controller output to compensate for the plant gain $b_0$;

(3) set up a full-order observer for the integrator chain model (estimated states $\hat{x}_1, \dots, \hat{x}_n$), extended by a constant input disturbance (estimate $\hat{x}_{n+1}$) that captures both actual disturbances and model uncertainties (“extended state observer”, ESO); \ No newline at end of file diff --git a/samples/texts/3314066/page_2.md b/samples/texts/3314066/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..b28916308939c0fcfa66e4d485fa740d73716a71 --- /dev/null +++ b/samples/texts/3314066/page_2.md @@ -0,0 +1,101 @@
(4) compensate the disturbance using the estimate $\hat{x}_{n+1}$;

(5) design a full-order state-feedback controller for the remaining “pure” integrator chain $1/s^n$ to achieve the desired closed-loop dynamics.

Control law and observer equations are illustrated in Figure 1, and given in (1) and (2) for the $n$-th order case.
$$
\begin{gathered}
u(t) = \frac{1}{b_0} \cdot (k_1 \cdot r(t) - (\mathbf{k}^\mathrm{T} \;\; 1) \cdot \hat{\mathbf{x}}(t)) \quad (1) \\
\text{with } \mathbf{k}^\mathrm{T} = (k_1 \cdots k_n), \quad \hat{\mathbf{x}} = (\hat{x}_1 \cdots \hat{x}_{n+1})^\mathrm{T}
\end{gathered}
$$

$$
\begin{gathered}
\dot{\hat{\mathbf{x}}}(t) = \mathbf{A}_{\text{ESO}} \cdot \hat{\mathbf{x}}(t) + \mathbf{b}_{\text{ESO}} \cdot u(t) + \mathbf{l} \cdot (y(t) - \mathbf{c}_{\text{ESO}}^{\mathrm{T}} \cdot \hat{\mathbf{x}}(t)) \quad (2) \\
\text{with } \mathbf{A}_{\text{ESO}} = \begin{pmatrix} \mathbf{0}^{n \times 1} & \mathbf{I}^{n \times n} \\ 0 & \mathbf{0}^{1 \times n} \end{pmatrix}, \quad \mathbf{b}_{\text{ESO}} = \begin{pmatrix} \mathbf{0}^{(n-1) \times 1} \\ b_0 \\ 0 \end{pmatrix}, \\
\mathbf{l} = (l_1 \cdots l_{n+1})^\mathrm{T}, \quad \mathbf{c}_{\text{ESO}}^\mathrm{T} = (1 \;\; \mathbf{0}^{1 \times n})
\end{gathered}
$$

Fig. 1. Control loop with ADRC, consisting of extended state observer (ESO) and state-feedback controller including disturbance compensation. Signals: controlled variable $y$, control action $u$, reference $r$; and possible disturbances at plant input ($d$) and output ($n$).

### 2.2 Tuning by Bandwidth Parameterization

Assuming perfect disturbance rejection and compensation of the plant gain $b_0$ by the control law (1), the state-feedback controller $\mathbf{k}^\mathrm{T}$ has to be designed for a “virtual” plant in form of a pure $n$-th order integrator chain:

$$
\begin{gathered}
\dot{\mathbf{x}}(t) = \mathbf{A} \cdot \mathbf{x}(t) + \mathbf{b} \cdot u(t), \quad y(t) = x_1(t) \quad (3) \\
\text{with } \mathbf{A} = \begin{pmatrix} \mathbf{0}^{(n-1) \times 1} & \mathbf{I}^{(n-1) \times (n-1)} \\ 0 & \mathbf{0}^{1 \times (n-1)} \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} \mathbf{0}^{(n-1) \times 1} \\ 1 \end{pmatrix}.
\end{gathered}
$$

The predominant controller design approach in linear ADRC is called “bandwidth parameterization”, cf. Gao (2003), and uses pole placement with all poles at $\lambda = -\omega_{\mathrm{CL}}$, the desired closed-loop bandwidth:

$$
\begin{align}
(\lambda + \omega_{\mathrm{CL}})^n &\stackrel{!}{=} \det (\lambda \mathbf{I} - (\mathbf{A} - \mathbf{b}\mathbf{k}^{\mathrm{T}})) && (4) \\
&= \lambda^n + k_n \lambda^{n-1} + \dots + k_2 \lambda + k_1. &&
\end{align}
$$

Binomial expansion of (4) leads to the controller gains:

$$ k_i = \frac{n!}{(n-i+1)! (i-1)!} \omega_{\mathrm{CL}}^{n-i+1} \quad \forall i=1, \dots, n. \quad (5) $$

For tuning the extended state observer (ESO) with the bandwidth approach, we will follow the notation of Herbst (2013) in placing the closed-loop observer poles at $\lambda = -k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}}$, with $k_{\mathrm{ESO}}$ being the relative factor between observer and control loop bandwidth:

$$
\begin{align}
(\lambda + k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}})^{n+1} &\stackrel{!}{=} \det (\lambda \mathbf{I} - (\mathbf{A}_{\mathrm{ESO}} - \mathbf{l}\mathbf{c}_{\mathrm{ESO}}^{\mathrm{T}})) && (6) \\
&= \lambda^{n+1} + l_1\lambda^n + \dots + l_n\lambda + l_{n+1}.
\end{align}
$$

Binomial expansion of (6) yields the observer gains:

$$ l_i = \frac{(n+1)!}{(n+1-i)! i!} (k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}})^i \quad \forall i=1, \dots, n+1. \quad (7) $$

Comparing these two tuning tasks for linear ADRC, we can conclude that, in both cases, only integrator chains are to be handled: of order $n$ (for the closed-loop dynamics) and $n + 1$ (for the extended state observer).

## 3. α-CONTROLLER APPROACH

### 3.1 Brief Overview of the Tuning Method

Buhl and Lohmann (2009) introduced the so-called α-controller approach, a design method leading to an exponentially decreasing Lyapunov function for the closed-loop system.
The rate of decay $\alpha$ is the only design parameter of this method:

$$
\dot{V} = -\alpha V, \quad \text{with } \alpha > 0, \text{ and } V = \mathbf{x}^{\mathrm{T}} \mathbf{P} \mathbf{x}. \quad (8)
$$

With a plant as in (3):

$$
\begin{align}
\dot{V} &= \left(\frac{\partial V}{\partial \mathbf{x}}\right)^{\mathrm{T}} \dot{\mathbf{x}} = 2\mathbf{x}^{\mathrm{T}} \mathbf{P} (\mathbf{A}\mathbf{x} + \mathbf{b}u) && (9) \\
&= \mathbf{x}^{\mathrm{T}} (\mathbf{A}^{\mathrm{T}} \mathbf{P} + \mathbf{P}\mathbf{A}) \mathbf{x} + 2\mathbf{x}^{\mathrm{T}} \mathbf{P}\mathbf{b}u \\
&= -\alpha \mathbf{x}^{\mathrm{T}} \mathbf{P} \mathbf{x}.
\end{align}
$$

A suitable control law for achieving a negative $\dot{V}$ in (9) is:

$$ u = -\mathbf{b}^{\mathrm{T}}\mathbf{P}\mathbf{x}. \quad (10) $$

Combining these two equations we obtain an algebraic Riccati equation:

$$ \left(\mathbf{A} + \frac{\alpha}{2}\mathbf{I}\right)^{\mathrm{T}}\mathbf{P} + \mathbf{P}\left(\mathbf{A} + \frac{\alpha}{2}\mathbf{I}\right) - 2\mathbf{P}\mathbf{b}\mathbf{b}^{\mathrm{T}}\mathbf{P} = \mathbf{0}. \quad (11) $$

In summary, the state-feedback controller gains for the α-controller approach are $\mathbf{k}^\mathrm{T} = \mathbf{b}^\mathrm{T}\mathbf{P}$, with $\mathbf{P}$ being the solution of (11).

### 3.2 Comparison to Bandwidth Parameterization

When applying the α-controller tuning approach to a loop with the plant being an integrator chain, the closed-loop poles may be complex-valued, but will all have the same real part of $-\frac{\alpha}{2}$, cf. the proof in Buhl and Lohmann (2009). On the other hand, using the “bandwidth parameterization” pole placement design as given in (4), all closed-loop poles will be at $-\omega_{CL}$ and real-valued only.

Being the respective counterparts of PI and PID controllers, first- and second-order ADRC are the most relevant cases in practical applications, resulting in tuning tasks for integrator chains of order up to three.
When comparing the α-controller tuning results with pole placement (bandwidth parameterization for $\omega_{CL}$), two observations can be made: \ No newline at end of file diff --git a/samples/texts/3314066/page_3.md b/samples/texts/3314066/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..98ffb51caf31ed7f14b5a146af99180c9d328e5a --- /dev/null +++ b/samples/texts/3314066/page_3.md @@ -0,0 +1,27 @@
(1) Selecting the tuning parameters of both methods as $\alpha = \omega_{CL}$ results in similar closed-loop dynamics for integrator chains of order two and above, with a slightly underdamped response for the $\alpha$-controller due to the complex-valued poles. For the second- and third-order case, the closed-loop step responses achieved with these two methods are compared in Figure 2. A first-order $\alpha$-controller design would necessarily be slower, since only one real-valued pole (at $-\frac{\alpha}{2}$, therefore at half the bandwidth of $\omega_{CL}$) can be placed.

(2) When designing with $\alpha = \omega_{CL}$, the resulting controller gains of the $\alpha$-controller approach are exactly half of the controller gains obtained using pole placement with bandwidth parameterization. We will prove this relation in Section 4.

Fig. 2. Comparison of the normalized closed-loop step responses using bandwidth parameterization (pole placement) and $\alpha$-controller design for full-order state-feedback control of integrator chain systems of order $n=2$ and $n=3$.

The closed-loop pole configurations of $\alpha$-controller designs are presented in Figure 3, with the most important cases being:

* $\lambda_{1/2} = \left(-\frac{1}{2} \pm \frac{1}{2}i\right) \cdot \alpha$ for the second-order integrator chain, and

* $\lambda_1 = -\frac{1}{2} \cdot \alpha$, $\lambda_{2/3} = \left(-\frac{1}{2} \pm \frac{\sqrt{3}}{2}i\right) \cdot \alpha$ for the third-order integrator chain.
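These pole locations can be checked numerically. The following sketch (illustrative Python, not part of the original text; function names are ours) builds the integrator chain of (3), computes the bandwidth-parameterization gains of (5), halves them, and confirms that all closed-loop poles share the real part $-\alpha/2$:

```python
import numpy as np
from math import factorial

def bw_gains(n, w):
    """Bandwidth-parameterization gains, eq. (5): all poles at -w."""
    return np.array([factorial(n) / (factorial(n - i + 1) * factorial(i - 1))
                     * w ** (n - i + 1) for i in range(1, n + 1)])

def closed_loop_poles(k):
    """Poles of the integrator chain x' = Ax + bu under u = -k^T x."""
    n = len(k)
    A = np.diag(np.ones(n - 1), 1)        # upper shift matrix, as in eq. (3)
    b = np.zeros((n, 1))
    b[-1, 0] = 1.0
    return np.linalg.eigvals(A - b @ k.reshape(1, -1))

alpha = 1.0
poles = closed_loop_poles(bw_gains(3, alpha) / 2)   # half-gain design, n = 3
# all three poles have real part -alpha/2; the complex pair sits at
# (-1/2 +/- sqrt(3)/2 i) * alpha, matching the third-order case above
```

For $n = 3$ and $\alpha = 1$, the halved gains give the characteristic polynomial $\lambda^3 + 1.5\lambda^2 + 1.5\lambda + 0.5 = (\lambda + \tfrac{1}{2})(\lambda^2 + \lambda + 1)$, whose roots are exactly the poles listed in the bullet above.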
Concluding this comparison: the $\alpha$-controller design leads to similar closed-loop dynamics for systems of order two and above, but with only half the controller gain compared to a pole placement design with bandwidth parameterization. Therefore the impact of measurement noise on the control action will be reduced, making the $\alpha$-controller design an interesting alternative for noise-affected systems, if the underdamped behavior is tolerable.

Fig. 3. Closed-loop poles resulting from $\alpha$-controller design for integrator chain plants of orders $n=2$ to $n=5$. Bandwidth parameterization, by contrast, will always place all poles at $-\alpha$.

## 4. THE HALF-GAIN TUNING METHOD FOR ADRC

As pointed out in Section 3.2, there are up to three options for replacing bandwidth parameterization in linear ADRC with the $\alpha$-controller approach: (1) only for the controller (using $\alpha = \omega_{CL}$, for ADRC of order $n \ge 2$); (2) only for the observer (using $\alpha = k_{ESO} \cdot \omega_{CL}$, possible for any order $n \ge 1$); or (3) for both controller and observer (for $n \ge 2$).

Applying $\alpha$-controller tuning to ADRC results in halved controller and/or observer gains, while maintaining similar (albeit slightly underdamped) dynamics for the control loop and/or the extended state observer. We will therefore denote this design approach for ADRC as the “half-gain tuning” method.

This is the main result of this article, which will be proved in the following: to obtain the $\alpha$-controller gains, the Riccati equation (11) does not have to be solved. The gains can simply be obtained from the straightforward bandwidth tuning rules, i.e. equations (5) (controller) or (7) (observer), by halving these gains for a bandwidth $\omega_{CL} = \alpha$ (controller) or $k_{ESO} \cdot \omega_{CL} = \alpha$ (observer).
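As a sketch of the resulting procedure (illustrative code with a hypothetical function name, based on the gain formulas (5) and (7)), half-gain tuning amounts to evaluating the bandwidth-parameterization gains and dividing by two:

```python
from math import factorial

def half_gain_adrc(n, w_cl, k_eso):
    """Half-gain ADRC tuning: bandwidth-parameterization controller
    gains, eq. (5), and observer gains, eq. (7), each divided by two."""
    k = [factorial(n) / (factorial(n - i + 1) * factorial(i - 1))
         * w_cl ** (n - i + 1) / 2 for i in range(1, n + 1)]
    l = [factorial(n + 1) / (factorial(n + 1 - i) * factorial(i))
         * (k_eso * w_cl) ** i / 2 for i in range(1, n + 2)]
    return k, l

# second-order ADRC, w_cl = 1 rad/s, k_eso = 10:
# k = [0.5, 1.0], l = [15.0, 150.0, 500.0]
k, l = half_gain_adrc(n=2, w_cl=1.0, k_eso=10.0)
```

This sketch corresponds to option (3); for options (1) or (2), only the controller gains `k` or only the observer gains `l` would be halved relative to the bandwidth design.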
+ +**Theorem 1.** For plants as given in (3), the controller gains $k_{BW}^T$ obtained via bandwidth parameterization in (4) are related to the $\alpha$-controller gains $k_\alpha^T$ from (10), (11) by \ No newline at end of file diff --git a/samples/texts/3314066/page_4.md b/samples/texts/3314066/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..97726dde7ac1eacc84bd4b4ffec7edef4f7d7ec9 --- /dev/null +++ b/samples/texts/3314066/page_4.md @@ -0,0 +1,65 @@ +an exact factor of two, if $k_{\text{BW}}^{\text{T}}$ has been designed for a bandwidth $\omega_{\text{CL}} = \alpha$: + +$$ k_{\alpha}^{\text{T}} = \frac{1}{2} \cdot k_{\text{BW}}^{\text{T}} = \frac{1}{2} \cdot (k_{\text{BW},1} \cdots k_{\text{BW},n}). \quad (12) $$ + +**Proof.** We start by rewriting (11) as follows, + +$$ \alpha \mathbf{P} = -(\mathbf{A}^{\mathrm{T}} \mathbf{P} + \mathbf{P} \mathbf{A}) + \mathbf{S}, \quad (13) $$ + +where, using (10) and (12), + +$$ \mathbf{S} = \frac{1}{2}\mathbf{k}_{\text{BW}}\mathbf{k}_{\text{BW}}^{\text{T}} = \frac{1}{2} \cdot (k_{\text{BW},1}\mathbf{k}_{\text{BW}} \cdots k_{\text{BW},n}\mathbf{k}_{\text{BW}}). \quad (14) $$ + +Since **A** is an upper shift matrix, **PA** will result in **P**'s columns **p**ᵢ being shifted: + +$$ \mathbf{P}\mathbf{A} = (\mathbf{0} \ \mathbf{p}_1 \ \cdots \ \mathbf{p}_{n-1}). \quad (15) $$ + +From the first column of (13) we obtain **p**₁, and, as an abbreviation, introduce **Φ**: + +$$ \begin{aligned} \alpha \mathbf{p}_1 &= -\mathbf{A}^{\mathrm{T}} \mathbf{p}_1 + \frac{k_{\mathrm{BW},1}}{2} \mathbf{k}_{\mathrm{BW}} \\ \mathbf{p}_1 &= (\alpha \mathbf{I} + \mathbf{A}^{\mathrm{T}})^{-1} \cdot \frac{k_{\mathrm{BW},1}}{2} \mathbf{k}_{\mathrm{BW}} = \mathbf{\Phi}^{-1} \cdot \frac{k_{\mathrm{BW},1}}{2} \mathbf{k}_{\mathrm{BW}}. 
\end{aligned} \quad (16) $$ + +For all other columns ($i=2, \dots, n$): + +$$ \begin{aligned} \alpha p_i &= -A^T p_i - p_{i-1} + \frac{k_{BW,i}}{2} k_{BW} \\ p_i &= -\Phi^{-1} \cdot p_{i-1} + \Phi^{-1} \cdot \frac{k_{BW,i}}{2} k_{BW}. \end{aligned} \quad (17) $$ + +We now recursively expand (17) for the final ($n$-th) column: + +$$ p_n = \sum_{i=1}^{n} (-1)^{(n-i)} \cdot \Phi^{-(n-i+1)} \cdot \frac{k_{\text{BW},i}}{2} k_{\text{BW}}. \quad (18) $$ + +$\mathbf{p}_n^\mathrm{T}$ is the gain vector of the $\alpha$-controller, since, recalling (10) with (3), $\mathbf{k}_\alpha^\mathrm{T} = \mathbf{b}^\mathrm{T}\mathbf{P} = \mathbf{b}^\mathrm{T}\mathbf{P}^\mathrm{T} = \mathbf{p}_n^\mathrm{T}$. Multiplying (18) with $\mathbf{\Phi}^n$ one obtains: + +$$ \mathbf{\Phi}^n \cdot \mathbf{p}_n = \left( \sum_{i=1}^{n} (-1)^{(n-i)} \cdot \mathbf{\Phi}^{(i-1)} \cdot k_{\text{BW},i} \right) \cdot \frac{1}{2} k_{\text{BW}}. \quad (19) $$ + +The characteristic polynomial of $\mathbf{\Phi}$ is: + +$$ \det (\lambda \mathbf{I} - \mathbf{\Phi}) = \det (\lambda \mathbf{I} - (\alpha \mathbf{I} + \mathbf{A}^{\mathrm{T}})) = (\lambda - \alpha)^n. \quad (20) $$ + +Comparing (20) with (4) and (5) when $\alpha = \omega_{\text{CL}}$ we find the characteristic polynomial to be: + +$$ \det (\lambda\mathbf{I} - \mathbf{\Phi}) = \lambda^n - \sum_{i=1}^{n} (-1)^{(n-i)} \cdot k_{\text{BW},i} \cdot \lambda^{i-1}. \quad (21) $$ + +This allows us to apply the Cayley-Hamilton theorem to (19), with $\mathbf{\Phi}^n = \sum_{i=1}^n (-1)^{(n-i)} \cdot (\mathbf{\Phi}^{(i-1)}) \cdot k_{\text{BW},i}$ we finally obtain: + +$$ p_n = k_\alpha = \frac{1}{2} k_{\text{BW}}. \quad (22) $$ + +This concludes the proof. As the analytical solution of the algebraic Riccati equation (11), it provides a link between optimal control and pole placement for linear ADRC. 
□

*Remark 2.* Due to the duality of the design problem, a proof of the half-gain relation for the extended state observer design (with $k_{\text{ESO}} \cdot \omega_{\text{CL}} = \alpha$) can be constructed in the same manner.

## 5. EXAMPLES

The aim of this section is to provide visual insights into an ADRC-based control loop when using half-gain tuning for the controller, the extended state observer, or both. For this purpose we can restrict ourselves to a second-order plant with normalized gain and eigenfrequency:

$$ P(s) = \frac{1}{s^2 + 2s + 1}. \quad (23) $$

Since ADRC is almost insensitive to the damping ratio, especially of underdamped systems, cf. Herbst (2013), the informative value of our example will not be compromised by the particular choice of critical damping in $P(s)$.

Bandwidth parameterization is applied to a second-order ADRC ($n=2$) using $\omega_{\text{CL}} = 1$ rad/s, $k_{\text{ESO}} = 10$, and $b_0 = 1$. Four cases are compared: (1) unmodified bandwidth tuning, (2) applying half-gain tuning only to the outer control loop (“K/2 controller”), (3) applying half-gain tuning only to the ESO (“L/2 observer”), and (4) half-gain tuning for both controller and observer.

### 5.1 Impact on Open-Loop Characteristics

For stability and dynamics, the feedback controller part of an ADRC control loop is essential. In Figure 4 the transfer functions from the controlled variable $y$ to the control signal $u$ are compared for the four cases. Additionally, the loop gain transfer functions are compared in Figure 5.

Fig. 4. Comparison of the feedback controller transfer functions with or without half-gain tuning.

The most interesting result might be that half-gain observer tuning (“L/2” case) provides significantly improved high-frequency damping while having almost no impact on the lower frequencies up to and including the crossover frequency.
On the other hand, one has to expect some low-frequency performance penalty when (additionally or solely) applying half-gain controller tuning (“K/2” cases). \ No newline at end of file diff --git a/samples/texts/3314066/page_5.md b/samples/texts/3314066/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..ddd931bcba61054e4cf4386c9c6f1d03f87738c4 --- /dev/null +++ b/samples/texts/3314066/page_5.md @@ -0,0 +1,53 @@
Fig. 5. Comparison of the open-loop gain transfer function with or without half-gain tuning.

### 5.2 Impact on Closed-Loop Characteristics

With the control loop signals denoted as in Figure 1, the “gang of six” transfer functions are defined as

$$ \begin{pmatrix} y(s) \\ u(s) \end{pmatrix} = \begin{pmatrix} G_{yr}(s) & G_{yd}(s) & G_{yn}(s) \\ G_{ur}(s) & G_{ud}(s) & G_{un}(s) \end{pmatrix} \cdot \begin{pmatrix} r(s) \\ d(s) \\ n(s) \end{pmatrix}, \quad (24) $$

providing frequency-domain insights for a two-degrees-of-freedom control loop as is the case with ADRC, cf. Åström and Murray (2008). For the four cases in our example, they are presented and discussed in Figure 6 and Figure 7.

While not shown here for brevity, a discrete-time implementation of “K/2” and “L/2” design was successfully tested as well, exhibiting the desired noise reduction in the control signal $u$.

## 6. CONCLUSION

A new “half-gain tuning” rule for linear active disturbance rejection control (ADRC) based on the so-called $\alpha$-controller design was introduced. Compared to the common “bandwidth parameterization” approach, similar closed-loop dynamics can be achieved with lower (halved) feedback gains, therefore reducing the noise sensitivity of ADRC.

In view of the examples presented in Section 5, a recommendation emerges to start with half-gain tuning for the observer.
This has the least impact on the closed-loop dynamics compared to bandwidth parameterization, while already providing a significant reduction of the control signal's sensitivity to measurement noise.

While rooted in the analytical solution of an algebraic Riccati equation, the proposed feedback gains can simply be obtained from a bandwidth parameterization design by halving the gains, as proved in this paper, establishing a link between pole placement and optimal control.

## ACKNOWLEDGEMENTS

Gernot Herbst would like to thank Michael Buhl for drawing his attention to the $\alpha$-controller approach.

## REFERENCES

Åström, K.J. and Murray, R.M. (2008). *Feedback Systems: An Introduction for Scientists and Engineers*. Princeton University Press.

Buhl, M. and Lohmann, B. (2009). Control with exponentially decaying Lyapunov functions and its use for systems with input saturation. In *2009 European Control Conference (ECC)*, 3148–3153. doi: 10.23919/ECC.2009.7074889.

Fliess, M. and Join, C. (2013). Model-free control. *International Journal of Control*, 86(12), 2228–2252. doi: 10.1080/00207179.2013.810345.

Gao, Z. (2003). Scaling and bandwidth-parameterization based controller tuning. In *Proceedings of the 2003 American Control Conference*, 4989–4996. doi: 10.1109/ACC.2003.1242516.

Gao, Z. (2006). Active disturbance rejection control: A paradigm shift in feedback control system design. In *Proceedings of the 2006 American Control Conference*, 2399–2405. doi:10.1109/ACC.2006.1656579.

Gao, Z., Huang, Y., and Han, J. (2001). An alternative paradigm for control system design. In *Proceedings of the 40th IEEE Conference on Decision and Control*. doi: 10.1109/CDC.2001.980926.

Han, J. (2009). From PID to active disturbance rejection control. *IEEE Transactions on Industrial Electronics*, 56(3), 900–906. doi:10.1109/TIE.2008.2011621.

Herbst, G. (2013).
A simulative study on active disturbance rejection control (ADRC) as a control tool for practitioners. *Electronics*, 2(3), 246–279. doi: 10.3390/electronics2030246. + +Herbst, G. (2016). Practical active disturbance rejection control: Bumpless transfer, rate limitation, and incremental algorithm. *IEEE Transactions on Industrial Electronics*, 63(3), 1754–1762. doi: 10.1109/TIE.2015.2499168. + +Madoński, R., Shao, S., Zhang, H., Gao, Z., Yang, J., and Li, S. (2019). General error-based active disturbance rejection control for swift industrial implementations. *Control Engineering Practice*, 84, 218–229. doi: 10.1016/j.conengprac.2018.11.021. + +Sira-Ramírez, H., Luviano-Juárez, A., Ramírez-Neria, M., and Zurita-Bustamante, E.W. (2017). *Active Disturbance Rejection Control of Dynamic Systems: A Flatness Based Approach*. Butterworth-Heinemann. + +Zheng, Q. and Gao, Z. (2018). Active disturbance rejection control: Some recent experimental and industrial case studies. *Control Theory and Technology*, 16(4), 301–313. doi:10.1007/s11768-018-8142-x. + +Zhou, B., Duan, G., and Lin, Z. (2008). A parametric Lyapunov equation approach to the design of low gain feedback. *IEEE Transactions on Automatic Control*, 53(6), 1548–1554. doi:10.1109/TAC.2008.921036. \ No newline at end of file diff --git a/samples/texts/3314066/page_6.md b/samples/texts/3314066/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..727d156e203e53409e87302c616db8baf57af2a2 --- /dev/null +++ b/samples/texts/3314066/page_6.md @@ -0,0 +1,3 @@ +Fig. 6. Gang-of-six comparison (frequency domain) with or without half-gain tuning for controller and/or observer within ADRC. As predicted in Section 5.1, the half-gain observer (“L/2”) case provides enhanced high-frequency damping in $G_{un}(j\omega)$ almost without any side-effects on other performance criteria. 
The “K/2” cases, on the other hand, involve slower reaction to reference signal changes and more overshoot induced by disturbances at the plant input, while still yielding some additional high-frequency damping in $G_{un}$.

Fig. 7. To support the interpretation of Figure 6, this figure gives a time-domain perspective: the step responses of the gang-of-six transfer functions with or without half-gain tuning. \ No newline at end of file diff --git a/samples/texts/3392042/page_12.md b/samples/texts/3392042/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..030f9ad71aa9394c56091ddf6e1a59f9e8611c89 --- /dev/null +++ b/samples/texts/3392042/page_12.md @@ -0,0 +1,43 @@
**Theorem 7.** For $h \in \text{Def}(\mathbb{R}^n)$,

$$ (21) \quad \int h \lfloor d\mu_k \rfloor = \int_{s=0}^{\infty} \mu_k\{h \ge s\} - \mu_k\{h < -s\} \, ds \quad \text{excursion sets} $$

$$ (22) \quad = \int_{\mathcal{P}_{n,n-k}} \int_P h \lfloor d\chi \rfloor d\lambda(P) \quad \text{slices} $$

$$ (23) \quad = \int_{G_{n,k}} \int_L \int_{\pi_L^{-1}(x)} h \lfloor d\chi \rfloor dx \, d\gamma(L) \quad \text{projections} $$

$$ (24) \quad = \int_{s=0}^{\infty} (\mathbf{C}^{\{h \ge s\}}(\omega_k) - \mathbf{C}^{\{h < -s\}}(\omega_k)) ds \quad \text{conormal cycle} $$

$$ (25) \quad = -\int (-h) \lceil d\mu_k \rceil \quad \text{duality} $$

*Proof.* Note that for $T > 0$ sufficiently large and $N = mT$,

$$
\begin{align*}
\int h \lfloor d\mu_k \rfloor &= \lim_{m \to \infty} \frac{1}{m} \int \lfloor mh \rfloor d\mu_k = \lim_{m \to \infty} \frac{1}{m} \sum_{i=1}^{\infty} \mu_k\{mh \ge i\} - \mu_k\{mh < -i\} \\
&= \lim_{N \to \infty} \frac{T}{N} \sum_{i=1}^{N} \mu_k\left\{h \ge \frac{iT}{N}\right\} - \mu_k\left\{h < -\frac{iT}{N}\right\} \\
&= \int_0^T \mu_k\{h \ge s\} - \mu_k\{h < -s\} ds.
+\end{align*} +$$ + +This proves (21); the same proof using $\lceil d\mu_k \rceil$ shows that + +$$ (26) \quad \int h \lceil d\mu_k \rceil = \int_{s=0}^{\infty} \mu_k\{h > s\} - \mu_k\{h \le -s\} ds, $$ + +which, together with (21), yields (25): applying (26) to $-h$ and negating reproduces the right-hand side of (21). For (22), + +$$ \int_0^\infty \mu_k\{h \ge s\} - \mu_k\{h < -s\} ds = \int_0^\infty \int_{\mathcal{P}_{n,n-k}} \chi(\{h \ge s\} \cap P) - \chi(\{h < -s\} \cap P) d\lambda(P) ds. $$ + +This integral is well-defined, since the excursion sets $\{h \ge s\}$ and $\{h < -s\}$ are definable, and $h$ is bounded and of compact support. The Fubini theorem yields (22) via + +$$ \int_{\mathcal{P}_{n,n-k}} \int_0^\infty \chi(\{h \ge s\} \cap P) - \chi(\{h < -s\} \cap P) ds d\lambda(P) = \int_{\mathcal{P}_{n,n-k}} \int_P h \lfloor d\chi \rfloor d\lambda(P). $$ + +For (23), fix an $L \in G_{n,k}$ and let $\pi_L$ be the orthogonal projection map onto $L$. Then the affine subspaces perpendicular to $L$ are the fibers of $\pi_L$ and + +$$ \{P \in \mathcal{P}_{n,n-k} : P \perp L\} = \{\pi_L^{-1}(x) : x \in L\}. $$ + +Instead of integrating over $\mathcal{P}_{n,n-k}$, integrate over the fibers of orthogonal projections onto all linear subspaces of $G_{n,k}$: + +$$ \int h \lfloor d\mu_k \rfloor = \int_{\mathcal{P}_{n,n-k}} \int_P h \lfloor d\chi \rfloor d\lambda(P) = \int_{G_{n,k}} \int_L \int_{\pi_L^{-1}(x)} h \lfloor d\chi \rfloor dx d\gamma(L). $$ + +Finally, for (24), rewrite (21) by expressing the intrinsic volumes in terms of the conormal cycles, as in (12). □ \ No newline at end of file diff --git a/samples/texts/3392042/page_3.md b/samples/texts/3392042/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..382b5d977ab1c8ebff4d9678ab7551c9b35fbba4 --- /dev/null +++ b/samples/texts/3392042/page_3.md @@ -0,0 +1,29 @@ +FIGURE 3. An upper step function of h, depicted at left, composed with a decreasing function c, becomes a lower step function of c(h), depicted at right. As the step size approaches zero, we obtain Proposition 13.
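The level-set computations above can be sanity-checked numerically in the simplest setting $n = 1$, $k = 0$, where $\mu_0 = \chi$. The sketch below is ours, not from the paper: it encodes a nonnegative step function as disjoint closed intervals with constant values and evaluates the excursion-set formula $\int h \lfloor d\chi \rfloor = \int_0^\infty \chi\{h \ge s\}\,ds$.

```python
# Illustrative check of the excursion-set formula for n = 1, k = 0 (mu_0 = chi).
# A nonnegative step function h is encoded as disjoint closed intervals with
# constant positive values; chi of a finite disjoint union of closed intervals
# is the number of pieces. The encoding and names are ours.

def euler_char(intervals):
    """Euler characteristic of a finite disjoint union of closed intervals."""
    return len(intervals)

def lower_euler_integral(pieces):
    """int h |dchi| = int_0^inf chi{h >= s} ds, for h >= 0 a step function.

    pieces: list of ((a, b), value) with disjoint intervals and value > 0.
    The excursion set {h >= s} is constant between consecutive levels of h."""
    levels = sorted({v for _, v in pieces})
    total, prev = 0.0, 0.0
    for s in levels:
        # {h >= s'} is the same set for every s' in (prev, s]
        excursion = [iv for iv, v in pieces if v >= s]
        total += (s - prev) * euler_char(excursion)
        prev = s
    return total

# h = 2 on [0, 1] and 1 on [2, 4]: the integral is 2*1 + 1*1 = 3.
print(lower_euler_integral([((0.0, 1.0), 2.0), ((2.0, 4.0), 1.0)]))  # 3.0
```

For $k = 1$ on $\mathbb{R}$ (where $\mu_1$ is length), the same loop applies with `euler_char` replaced by the total length of the excursion set, recovering the ordinary Lebesgue integral.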
+ +We may then rewrite the above sum as: + +$$ \int_{\mathbb{R}^n} \frac{1}{m} \lfloor m \, c(h) \rfloor \, d\mu_k = \sum_{s \in S} c(s) \cdot \mu_k\{c(s) \le c(h) < c(s-\epsilon)\} = \sum_{s \in S} c(s) \cdot \mu_k\{s - \epsilon < h \le s\}, $$ + +where $\epsilon \to 0$ as $m \to \infty$ by continuity of $c$. In the limit, both sides are equal: + +$$ \lim_{\epsilon \to 0} \sum_{s \in S} c(s) \cdot \mu_k\{s - \epsilon < h \le s\} = \lim_{m \to \infty} \sum_{i \in \mathbb{Z}} c\left(\frac{i}{m}\right) \cdot \mu_k\left\{\frac{i-1}{m} < h \le \frac{i}{m}\right\}. \quad \square $$ + +Proposition 13 implies that if $c : \mathbb{R} \to \mathbb{R}$ is increasing on some interval and decreasing on another, then the maps $v, u : \mathrm{Def}(\mathbb{R}^n) \to \mathbb{R}$ defined by + +$$ v(h) = \int_{\mathbb{R}^n} c(h) \, \lfloor d\mu_k \rfloor \quad \text{and} \quad u(h) = \int_{\mathbb{R}^n} c(h) \, \lceil d\mu_k \rceil $$ + +are neither lower- nor upper-continuous. + +Lemma 12 and Proposition 13 provide a generalization of Hadwiger's Theorem: + +**Theorem 14.** Any $E_n$-invariant definably lower-continuous valuation $v : \mathrm{Def}(\mathbb{R}^n) \to \mathbb{R}$ is of the form: + +$$ (36) \qquad v(h) = \sum_{k=0}^{n} \left( \int_{\mathbb{R}^n} c_k \circ h \, \lfloor d\mu_k \rfloor \right), $$ + +for some $c_k \in C(\mathbb{R})$ continuous and monotone, satisfying $c_k(0) = 0$. Likewise, an upper-continuous valuation can be similarly written in terms of upper Hadwiger integrals. + +*Proof.* Let $v : \mathrm{Def}(\mathbb{R}^n) \to \mathbb{R}$ be a lower valuation, and $h \in \mathrm{Def}(\mathbb{R}^n)$. First approximate $h$ by lower step functions. That is, for $m > 0$, let $h_m = \frac{1}{m} \lfloor mh \rfloor$. In the lower flat topology, $\lim_{m\to\infty} h_m = h$. On each of these step functions, Lemma 12 implies that $v$ is a linear combination of Hadwiger integrals: + +$$ (37) \qquad v(h_m) = \sum_{k=0}^{n} \int_{\mathbb{R}^n} c_k(h_m) \, d\mu_k,
$$ + +for some $c_k : \mathbb{R} \to \mathbb{R}$ with $c_k(0) = 0$, depending only on $v$ and not on $m$. By Proposition 13, the $c_k$ must be increasing functions, since we are approximating $h$ with lower step functions in the lower flat topology. \ No newline at end of file diff --git a/samples/texts/4011427/page_1.md b/samples/texts/4011427/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..5d21d5516d69b6658722a65e7522a3dfcb89858f --- /dev/null +++ b/samples/texts/4011427/page_1.md @@ -0,0 +1,17 @@ +# ERPOT: A quad-criteria scheduling heuristic to optimize the execution time, failure rate, power consumption and temperature in multicores + +Athena Abdi, Alain Girault, Hamid Zarandi + +## To cite this version: + +Athena Abdi, Alain Girault, Hamid Zarandi. ERPOT: A quad-criteria scheduling heuristic to optimize the execution time, failure rate, power consumption and temperature in multicores. [Research Report] RR-9196, Inria. 2019, pp. 1-37. hal-01848087v2 + +HAL Id: hal-01848087 + +https://hal.inria.fr/hal-01848087v2 + +Submitted on 5 Mar 2019 + +**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
\ No newline at end of file diff --git a/samples/texts/4011427/page_10.md b/samples/texts/4011427/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..631a7c3b9ef69b8db8b52eb4a6954ec55b6cd431 --- /dev/null +++ b/samples/texts/4011427/page_10.md @@ -0,0 +1,61 @@ +For each component *c*, we wish to take into account the effect of the temperature of its +neighbors, according to the chip floorplan. To achieve this, we add a heat transfer term to +Eq. (12): + +$$ +C \cdot \left( \frac{dT_c(t)}{dt} \right) + G(T_c(t) - T_{amb}) + 2D\_heat = P(t) \quad (13) +$$ + +and we use the coarse-grain floorplan of Fig. 3(c) (similar to the spatial thermal model and +floorplan of [33]) to model this two-dimensional heat transfer as: + +$$ +2D\_heat = \sum_{c' \in nbr(c)} \kappa(c, c') \cdot (T_c(t) - T_{c'}(t)) \quad (14) +$$ + +where $nbr(c)$ is the set of all neighbors of $c$, $T_c$ is the temperature of $c$, and $\kappa(c, c')$ is the thermal conductivity between $c$ and $c'$, which depends on their distance and on the chip geometry characteristics (as given in the floorplan). + +Combining Eqs. (13) and (14) yields the following differential equation for $T_c(t)$: + +$$ +\begin{equation} +\begin{split} +C \cdot \left( \frac{dT_c(t)}{dt} \right) = {}& -G \cdot (T_c(t) - T_{amb}) \\ +& - \sum_{c' \in nbr(c)} \kappa(c, c') \cdot (T_c(t) - T_{c'}(t)) \\ +& + C_{ef} \cdot V^2 \cdot f + \alpha \cdot T_c(t) + \beta_h +\end{split} +\tag{15} +\end{equation} +$$ + +We then proceed as in [8] to re-write Eq.
(15) as: + +$$ +\begin{align*} +\frac{dT_c(t)}{dt} &= -A \cdot T_c(t) + B, && \text{with} \tag{16} \\ +A &= \frac{G - \alpha + \sum_{c' \in nbr(c)} \kappa(c, c')}{C} \\ +B &= \frac{G \cdot T_{amb} + \sum_{c' \in nbr(c)} \kappa(c, c') \cdot T_{c'}(t) + C_{ef} \cdot V^2 \cdot f + \beta_h}{C} +\end{align*} +$$ + +When computing the evolution of the temperature of *c* during the execution of *τ*, we assume +that the temperatures of the neighbors remain constant for the entire duration of *τ*, and equal +to their respective temperature at the end of *τ*. By virtue of the same reasoning as the one made +in Section 3.4, this is safe regarding the *Tobj* constraint. It follows that the closed-form solution +of the differential equation (16) is: + +$$ +T_c(t) = T_{\infty}^{\text{heat}} + (T_0 - T_{\infty}^{\text{heat}}) \cdot e^{-A(t-t_0)} \quad (17) +$$ + +where $T_\infty^{\text{heat}} = B/A$ is the heating steady-state temperature and $T_0 = T(t_0)$ is the temperature of the system at $t_0$. We denote by $T_c^{\text{heat}}(t_0,t)$ the temperature computed with Eq. (17). + +When *c* is idle, the computation of the temperature is identical, except that the term $C_{ef} \cdot V^2 \cdot f$ +in Eq. (11) disappears and $\beta_h$ is replaced by $\beta_c$, yielding the following closed form: + +$$ +\begin{align*} +B' &= \frac{G \cdot T_{amb} + \sum_{c' \in nbr(c)} T_{c'}(t) \cdot \kappa(c, c') + \beta_c}{C} \\ +T_c(t) &= T_\infty^{cool} + (T_0 - T_\infty^{cool}) \cdot e^{-A(t-t_0)} \tag{18} +\end{align*} +$$
We denote by $T_c^{cool}(t_0, t)$ the temperature computed with Eq. (18). + +# 4 ERPOT: The Proposed Quad-Criteria Optimization Scheduling Heuristic Method + +The optimal mapping of a DAG of tasks on a multicore is a known NP-complete problem [15]. We therefore propose a heuristic algorithm, more precisely a ready list scheduling algorithm, for which we formally prove that each computed schedule satisfies the GSFR, power consumption, and temperature constraints (Section 4.2). In addition, in order to assess the performance of our heuristic, we implement an optimal version on top of an ILP solver (Section 4.6). + +## 4.1 General principles of ERPOT + +We are given: + +(i) a DAG of tasks $\mathcal{Alg} = (\mathcal{V}, \mathcal{E})$, + +(ii) a multicore architecture description $\mathcal{Arc} = (\mathcal{C}, \mathcal{B}, \mathcal{L})$ along with the nominal failure rate per time unit $\lambda_0$ of each hardware component, + +(iii) a function $\mathcal{Exe}_{nom}$ of the nominal WCETs / WCCTs of all the tasks / data-dependencies of $\mathcal{Alg}$ onto all the cores / buses of $\mathcal{Arc}$, + +(iv) a set of frequencies for the cores $\mathcal{F} = \{f_j\}_{1 \le j \le l}$ and a fixed frequency $f_b$ for the buses, all taken as scaling factors, + +(v) three constraints $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$ respectively on the GSFR, the power consumption, and the temperature, + +(vi) and the initial temperature of the chip $T_{init}$. + +The goal is to compute, if it exists, a schedule of $\mathcal{Alg}$ onto $\mathcal{Arc}$ such that the three constraints are met and the execution time is minimal². If no solution is found, it means that the available hardware resources are not sufficient to meet the desired constraints $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$. This issue is discussed in Section 4.2. + +In order to keep the GSFR below $\Lambda_{obj}$, we use the *active replication* of tasks.
We compute the reliability of a partial schedule by building the corresponding Reliability Block Diagram (RBD) [26]. An RBD is a DAG that starts with a source node S and ends with a destination node D. Between S and D, each of its nodes corresponds to one task (or data-dependency) scheduled on a core (or bus). By definition, an RBD is *operational* iff there exists at least one operational path from S to D. A path is operational iff all the blocks in this path are operational. The probability that a block is operational is its reliability, computed with Eq. (2). By construction, the probability that an RBD is operational is therefore equal to the reliability of the static schedule it represents. + +Computing the reliability in this way assumes that the occurrences of the failures are *statistically independent events*. Without this hypothesis, the fact that some blocks belong to several + +²The execution time of the schedule is also called $C_{max}$ or schedule length in the scheduling community. \ No newline at end of file diff --git a/samples/texts/4011427/page_12.md b/samples/texts/4011427/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..170f8ed2031c1558b1386510b7843a93db71dccd --- /dev/null +++ b/samples/texts/4011427/page_12.md @@ -0,0 +1,18 @@ +ERPOT: A quad-criteria +scheduling heuristic to +optimize the execution +time, failure rate, power +consumption and +temperature in +multicores + +ABDI Athena, GIRAULT Alain, ZARANDI Hamid + +RESEARCH +REPORT + +Nº 9196-v2 + +March 2019 + +Project-Teams Spades \ No newline at end of file diff --git a/samples/texts/4011427/page_13.md b/samples/texts/4011427/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..d2c89c824b3e3a6e5d0977af6b044f6017ea4d57 --- /dev/null +++ b/samples/texts/4011427/page_13.md @@ -0,0 +1,13 @@ +paths from *S* to *D* makes the computation of the reliability very complex. 
Concerning hardware faults, this hypothesis is reasonable, but this would not be the case for software faults [34]. + +In general, the structure of the RBD is unspecified, which makes the reliability computation NP-complete [35]. Following [5], the solution we use to avoid this is to insert routing tasks (the execution time of which is 0) from each set of replicas of a predecessor task to the set of replicas of its successor task. As a result, the RBD is *serial-parallel*, which makes the reliability computation linear. For any task $\tau$ of *Alg*, all its replicas appear in parallel in the same block of the RBD, whose reliability is therefore computed by Eq. (3), and the RBD is composed of all these blocks in sequence (hence the serial-parallel structure). Consider for instance a simple DAG with two tasks $X \to Y$ to be scheduled onto a six-core chip with a single bus. If X is replicated twice on cores $c_1$ and $c_2$ (its two replicas being denoted $X^1$ and $X^2$), and Y is replicated twice on cores $c_4$ and $c_5$ (its two replicas being denoted $Y^1$ and $Y^2$), then the RBD for this schedule will have the form shown in Fig. 5. + +Figure 5: Reliability Block Diagram for a simple schedule with replication. + +In practice, our scheduling heuristics will optimize the placement of the routing tasks so as to minimize the total execution time, for instance by mapping *R* to $c_1$ or $c_2$. + +Owing to the serial-parallel structure of the RBD, computing the reliability of a schedule (be it partial or final) is compositional. It follows that, to guarantee that the entire schedule satisfies the $\Lambda_{obj}$ constraint, it suffices to guarantee that each block of the RBD satisfies this constraint.
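This compositionality can be sketched in a few lines. The following is an illustrative sketch, not the report's code: each task's replicas form a parallel block (assuming Eq. (3) is the usual at-least-one-replica-operational formula), and the serial-parallel RBD multiplies block reliabilities. The replica reliability numbers are invented.

```python
# Illustrative serial-parallel RBD evaluation. Replica reliabilities are
# invented numbers; the structure mirrors the X -> R -> Y example above.

def block_reliability(replica_rels):
    """Parallel block: operational iff at least one replica is operational."""
    prob_all_fail = 1.0
    for r in replica_rels:
        prob_all_fail *= (1.0 - r)
    return 1.0 - prob_all_fail

def schedule_reliability(blocks):
    """Serial composition of parallel blocks (serial-parallel RBD):
    the schedule is operational iff every block is operational."""
    rel = 1.0
    for replicas in blocks:
        rel *= block_reliability(replicas)
    return rel

# X replicated on two cores, a routing task R (zero WCET, taken as perfectly
# reliable here for illustration), and Y replicated on two cores:
print(schedule_reliability([[0.99, 0.98], [1.0], [0.97, 0.95]]))
```

Because each block's reliability is computed independently, checking the $\Lambda_{obj}$ constraint block by block, as described above, requires no global recomputation when a new task is scheduled.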
+ +In order to keep the power consumption below $P_{obj}$, we use two techniques: (i) on the one hand DVFS, which is available on many modern multicores such as the Intel i7-2600 quad-core or the Samsung Exynos 5422 octa-core; this allows us to lower the $P_{dyn}$ term of Eq. (11); and (ii) on the other hand we try to keep the temperature below $T_{obj}$, which allows us to lower the $P_{leak}$ term of Eq. (11). Computing the dynamic power consumption requires computing the energy consumed by the schedule (be it partial or final), and then dividing it by the schedule length. The compositionality issue raised by the GSFR computation also arises here. As demonstrated in [2], this issue can be solved by over-estimating the energy consumption each time the partial schedule has a “hole” at the end, that is, each time one of the cores is idle while the other cores are busy executing their last task. Over-estimation is achieved by computing the energy consumed by such a schedule as if the “hole” were “filled” with a virtual task running at the maximal frequency. + +In order to keep the temperature below $T_{obj}$, we insert cooling times to allow the cores to cool down [36, 8, 37] (the buses are always much less loaded than the cores, so they never need to cool down). We follow the same principle as the JUST strategy proposed in [8] for single-core processors, with two differences: first, the target architecture is a multicore, and second, our objective is to minimize the schedule length under a maximal temperature constraint. The rationale of the JUST strategy is to insert cooling times as late as possible and only when needed, i.e., just in time.
Thus, each time we want to schedule a task $\tau$ on a core *c*, we evaluate the temperature of each core in the multicore at the end of this task, taking into account the planned \ No newline at end of file diff --git a/samples/texts/4011427/page_14.md b/samples/texts/4011427/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..1d8684442b9f7798cc8a4af88249c8e64ddd10cf --- /dev/null +++ b/samples/texts/4011427/page_14.md @@ -0,0 +1,21 @@ +voltage and frequency of $\tau$ and the influence of the temperature of the neighbors of $c$. If it exceeds $T_{obj}$, then we postpone the starting time of $\tau$ by inserting a *cooling time* in order to cool down the core $c$. The length of the cooling time is the *smallest length* such that the temperature at the end of $\tau$ does not exceed $T_{obj}$. + +Recall that a high temperature has a negative effect on the reliability (as shown in Eq. (6)) as well as on the leakage power consumption (see Eq. (11)). This makes it all the more important to limit the maximal temperature. + +## 4.2 Quad-criteria scheduling heuristic algorithm + +ERPOT is a ready list scheduling algorithm implemented in MATLAB (1,300 lines of code). It works with two lists, the list *Ready*(n) of ready tasks and the list *Sched*(n) of scheduled tasks, where (n) denotes the current step of the list scheduling. At each step (n), we have *Ready*(n) ∩ *Sched*(n) = ∅. + +In a preliminary phase, we traverse the *Alg* graph breadth-first, from the output tasks to the input tasks, in order to compute, for each task τ, the *Longest Execution Path* from τ to the end of the graph, noted *LEP*(τ). This notion is similar to the “bottom-level” presented in [38]. Intuitively, *LEP*(τ) accounts for all the “future” tasks of τ. For each task τ, it is computed as follows: + +* If $succ(\tau) = \emptyset$, then we compute its LEP as $LEP(\tau) = (\sum_{c \in C} \mathcal{E}xe_{nom}(\tau, c)) / |C|$. 
The nominal execution time of $\tau$ is averaged over all the cores (the set $C$) since we do not know in advance onto which core $\tau$ will be actually scheduled. + +* If $succ(\tau) = \{\tau'\}$, then $LEP(\tau) = LEP(\tau') + (\sum_{c \in C} \mathcal{E}xe_{nom}(\tau, c)) / |C|$. Since $\tau$ has only one successor, its nominal execution time is added to the LEP of its only successor (again, averaged over all cores). + +* If $succ(\tau) = \{\tau_i\}_{1 \le i \le k}$ with $k \ge 2$, then $LEP(\tau) = \max_{1 \le i \le k} LEP(\tau_i) + (\sum_{c \in C} \mathcal{E}xe_{nom}(\tau, c)) / |C|$. Since $\tau$ has more than one successor, its averaged nominal execution time is added to the max of the LEPs of all its successors (again, averaged over all cores). + +Still in the preliminary phase, we build the set $2^C$ of all subsets of $C$, and for each such subset $\{c_i\}_{1 \le i \le k} \in 2^C$, we build all the possible sets of pairs $\{(c_i, f_j)\}_{1 \le i \le k, 1 \le j \le l}$, where $l$ is the number of available frequencies. We denote by $Q$ the set of all such sets of pairs (core,frequency). + +In the main phase of ERPOT, we first assign to *Ready*(0) the set of input tasks of $V$, and to *Sched*(0) the empty set. Then, at each step (n), we select the *most urgent* task to be scheduled among all the ready tasks, that is, the task $\tau_{urg}$ for which *LEP*($\tau$) is the largest: $\tau_{urg} = \underset{\tau \in \mathcal{R}eady^{(n)}}{\operatorname{argmax}} LEP(\tau)$. + +The next step involves selecting the best subset of cores and their associated frequencies to execute $\tau_{urg}$. Each $Q_i \in Q$ is a potential scheduling choice for $\tau_{urg}$, which we need to evaluate according to our three constraints and our minimization criterion. 
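The LEP recursion and the urgency-based selection described above can be sketched as follows. The DAG, the `succ` and `exe_nom` tables, and all numbers are hypothetical (ERPOT itself is implemented in MATLAB); the three LEP cases collapse into one `max` over successors.

```python
# Illustrative sketch of the LEP ("Longest Execution Path") recursion and the
# most-urgent-task selection. All task names and WCETs are invented.
from functools import lru_cache

succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # task DAG
exe_nom = {  # nominal WCET of each task on each core
    "A": {1: 2, 2: 4}, "B": {1: 6, 2: 6}, "C": {1: 1, 2: 3}, "D": {1: 2, 2: 2},
}
cores = [1, 2]

def avg_exe(tau):
    # WCET averaged over all cores, since the target core is not yet known
    return sum(exe_nom[tau][c] for c in cores) / len(cores)

@lru_cache(maxsize=None)
def lep(tau):
    # LEP(tau) = avg WCET of tau, plus the max LEP over its successors (0 if none)
    return avg_exe(tau) + max((lep(t) for t in succ[tau]), default=0.0)

def most_urgent(ready):
    # the most urgent ready task is the one with the largest LEP
    return max(ready, key=lep)

print(lep("A"))                 # 3 + max(8, 4) = 11.0
print(most_urgent(["B", "C"]))  # "B"
```

The `lru_cache` memoization plays the role of the breadth-first traversal: each task's LEP is computed once, so the preliminary phase stays linear in the size of the DAG.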
We denote by $Q_{best}$ the best scheduling choice, by $L^{(n)}$ the schedule length at step (n), thus before executing $\tau$ on $Q_{best}$, and by $L^{(n+1)}(\tau, Q_{best})$ the schedule length after executing $\tau$ on $Q_{best}$, which we shorten to $L^{(n+1)}$ to avoid heavy notation. Similarly, we denote by $\Lambda^{(n)}$ the GSFR, $E^{(n)}$ the energy, and $T^{(n)}$ the temperature at step (n), again shortened. We further denote by $\Lambda(\tau, Q_{best})$ the GSFR of the parallel block corresponding to executing $\tau$ onto each core of $Q_{best}$. Recall that we have explained in Section 4.1 that the GSFR of a schedule is computed block by block, as a result of the serial-parallel structure of its RBD. With these notations, $Q_{best}$ is given by the following \ No newline at end of file diff --git a/samples/texts/4011427/page_15.md b/samples/texts/4011427/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..a555a9e4c4f493c73e77f1b8b7ac163bb8134daf --- /dev/null +++ b/samples/texts/4011427/page_15.md @@ -0,0 +1,37 @@ +equation: + +$$ +\begin{equation} +\begin{aligned} +Q_{best} = \operatorname*{argmin}_{Q_i \in \mathcal{Q}} & \{ L^{(n+1)} \mid \Lambda(\tau, Q_i) \le \Lambda_{obj} \\ +& \wedge (E^{(n+1)} - E^{(n)}) \le P_{obj} \cdot (L^{(n+1)} - L^{(n)}) \\ +& \wedge T^{(n+1)} \le T_{obj} \} +\end{aligned} +\tag{19} +\end{equation} +$$ + +Eq. (19) might return an empty set $Q_{best}$. This can occur for three reasons: + +1. Either there is no subset of cores $Q_i$ that satisfies the GSFR criterion $\Lambda(\tau, Q_i) \le \Lambda_{obj}$. In other words, the number of available cores is not sufficient to reach the required GSFR level. The heuristic fails and returns a “no solution” result. Recall that we want to find solutions in the 4D space (execution time, GSFR, power, temperature). So “no solution” only means that there will be no Pareto point at the coordinates $(\Lambda_{obj}, P_{obj}, T_{obj})$ in the 4D space. + +2.
Or there is no subset of cores $Q_i$ that satisfies the power consumption criterion $(E^{(n+1)} - E^{(n)}) \le P_{obj} \cdot (L^{(n+1)} - L^{(n)})$. In other words, the available frequencies are not sufficient to reach the required power consumption level. As in case 1 above, the heuristic fails and returns a “no solution” result. + +3. Or there is no subset of cores $Q_i$ that satisfies the temperature criterion $T^{(n+1)} \le T_{obj}$. In this case, let $Q'_i = \{c_j \in Q_i \mid T^{(n+1)}(c_j) > T_{obj}\}$ and let $t_j$ be the earliest time at which $\tau$ can start on core $c_j$. We add to each core $c_j \in Q'_i$ a cooling time of length $s_j$ that starts at $t_j$, such that $s_j$ is the smallest integer satisfying the inequality: + +$$ +T_{c_j}^{\text{cool}}(t_j, s_j) + T_{c_j}^{\text{heat}}(t_j + s_j, \mathcal{E}xe(\tau, c_j, f_j)) \leq T_{\text{obj}} +$$ + +## 4.3 Soundness of our scheduling heuristic + +We prove in this section four key propositions on the produced schedules, which guarantee that +the schedules generated by ERPOT satisfy the $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$ constraints. + +**Proposition 3** Let S be a schedule of Alg onto Arc. If each task of Alg has been scheduled on the subset of cores $Q_{best}$ defined by Eq. (19), thus satisfying the GSFR constraint $\Lambda_{obj}$, then the total schedule S will also meet the $\Lambda_{obj}$ constraint. + +*Proof (see [5]):* Each task $\tau_i$ of Alg is scheduled onto a subset $Q_{best}^i$ that was selected by Eq. (19). Hence, for all $\tau_i$ in Alg, we have $\Lambda(\tau_i, Q_{best}^i) \le \Lambda_{obj}$. Owing to the serial-parallel structure of the RBD corresponding to the schedule S and to the fact that the GSFR is computed compositionally from the RBD, it follows that $\Lambda(S) \le \Lambda_{obj}$. $\square$ + +**Proposition 4** Let S be a schedule of Alg onto Arc. If each task of Alg has been scheduled on the subset of cores $Q_{best}$ defined by Eq.
(19), thus satisfying the power consumption constraint $P_{obj}$, then the total schedule S will also meet the $P_{obj}$ constraint. + +*Proof (see [2]):* The proof follows from Eq. (19) and from the compositionality of the power consumption (as opposed to the energy). Notice that the constraint on the power consumption in Eq. (19) is actually expressed as a constraint between the *energy increase* ($E^{(n+1)} - E^{(n)}$) and the *schedule length increase* ($L^{(n+1)} - L^{(n)}$). The reason is the following: suppose that, at \ No newline at end of file diff --git a/samples/texts/4011427/page_16.md b/samples/texts/4011427/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..0d4cfcd04b944606a0247bdc73861108b90925a7 --- /dev/null +++ b/samples/texts/4011427/page_16.md @@ -0,0 +1,27 @@ +step (n), the most urgent task is $\tau_i$ with $Q_{best}^i = \{(c_i, f_i)\}$; suppose also that, in the partial schedule before mapping $\tau_i$, the finish time $L_{c_i}$ on core $c_i$ is such that $L_{c_i} + \mathcal{E}xe(\tau_i, c_i, f_i) \le L^{(n)}$; in other words, scheduling $\tau_i$ on $c_i$ at frequency $f_i$ does not increase the current schedule length because there is a “hole” at the end of the schedule of core $c_i$. Hence $L^{(n)} = L^{(n+1)}$. In contrast, the energy does increase when $\tau_i$ is scheduled on $c_i$ at frequency $f_i$, so $E^{(n+1)} > E^{(n)}$. To overcome this issue, we have proposed in [2] a solution where we “fill” each “hole” at the end of the schedule with a virtual task executing at the maximal frequency $f_{max}$. It follows that the energy consumed by the partial schedule at each step (n) is over-estimated. + +With this over-estimation, we prove the desired property by induction on (n). At step (1), the property is verified because the first task is scheduled according to Eq.
(19): + +$$ (E^{(1)} - E^{(0)}) \le P_{obj} \cdot (L^{(1)} - L^{(0)}) \iff P^{(1)} = \frac{E^{(1)}}{L^{(1)}} \le P_{obj} $$ + +since $E^{(0)} = L^{(0)} = 0$. Then, our induction hypothesis is: + +$$ P^{(n)} = \frac{E^{(n)}}{L^{(n)}} \le P_{obj} \iff E^{(n)} \le P_{obj} L^{(n)} \quad (20) $$ + +As a result of Eq. (19), we have: + +$$ \begin{align*} (E^{(n+1)} - E^{(n)}) &\le P_{obj} \cdot (L^{(n+1)} - L^{(n)}) \\ \iff & E^{(n+1)} \le E^{(n)} + P_{obj} L^{(n+1)} - P_{obj} L^{(n)} \end{align*} $$ + +Owing to the induction hypothesis (20), this implies: + +$$ \begin{align*} E^{(n+1)} &\le P_{obj} L^{(n)} + P_{obj} L^{(n+1)} - P_{obj} L^{(n)} \\ \iff & E^{(n+1)} \le P_{obj} L^{(n+1)} \\ \iff & P^{(n+1)} = \frac{E^{(n+1)}}{L^{(n+1)}} \le P_{obj} \end{align*} $$ + +which concludes the proof by induction. $\square$ + +**Proposition 5** Let S be a schedule of Alg onto Arc with initial temperature $T_{init}$. If each task of Alg has been scheduled on the subset of cores $Q_{best}$ defined by Eq. (19), thus satisfying the temperature constraint $T_{obj}$, and if $T_{init} \le T_{obj}$, then the maximum temperature reached during one execution of S starting at $T_{init}$ will also meet the $T_{obj}$ constraint. + +**Proof:** By hypothesis, $T^{(0)} = T_{init} \le T_{obj}$. Then, the maximum temperature during S is equal to $\max_{1\le i\le n} T^{(i)}$. Since each scheduling decision satisfies Eq. (19), it follows that $\forall 1 \le i \le n$, $T^{(i)} \le T_{obj}$. As a conclusion we have $\max_{1\le i\le n} T^{(i)} \le T_{obj}$. $\square$ + +## 4.4 Dealing with reactive systems + +Propositions 3 and 4 are valid when the schedule is executed once, but also when the schedule is executed repeatedly and infinitely, as is the case for *reactive systems*. What characterizes a reactive system is that it controls some physical device (e.g., a satellite) and that it must continue to do so during the entire life of this physical device.
Proposition 5 is valid when the schedule S is executed once, but not when it is repeated infinitely. The reason is the difference between the initial temperature $T_{init}$ when the schedule starts and the final temperature $T_f$ when the schedule ends (and also the fact that the temperature curve depends on the initial temperature, as opposed to the GSFR and the power). Two cases arise: \ No newline at end of file diff --git a/samples/texts/4011427/page_17.md b/samples/texts/4011427/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..0a137589f7d1316c6b07123078e4d70d287d505d --- /dev/null +++ b/samples/texts/4011427/page_17.md @@ -0,0 +1,35 @@ +1. If $T_{init} < T_f \leq T_{obj}$, then executing the same schedule a second time will inevitably increase the temperature further, so after some bounded number of executions of this schedule, the multicore temperature will violate the $T_{obj}$ constraint. Recall that the cooling times are static and have been inserted in the schedule based on $T_{init}$. This is not safe. + +2. If $T_f < T_{init} \leq T_{obj}$, then executing the same schedule a second time will inevitably decrease the temperature further, so after a large number of executions of this schedule, the multicore temperature will drop to the ambient temperature. This is not optimal. + +Therefore, in order to be safe regarding the $T_{obj}$ constraint and to be optimal, we should guarantee that $T_f = T_{init}$, which can only be achieved by being in Case 1 and then inserting on each core a cooling time until the average temperature of the multicore is equal to $T_{init}$ (because we can cool down the multicore after executing the schedule by inserting a cooling time, while we cannot heat it). Proposition 6 generalizes Proposition 5 to the case of a schedule executed repeatedly.
+ +**Proposition 6** Let S be a schedule of Alg onto Arc with initial temperature $T_{init}$, final temperature $T_f$, and execution time $C_{max}$. If each task of Alg has been scheduled on the subset of cores $Q_{best}$ defined by Eq. (19) (thus satisfying the temperature constraint $T_{obj}$), if $T_{init} \leq T_f \leq T_{obj}$, and if we insert a cooling time of size $\delta$ at the end of S such that $T^{\text{cool}}(C_{max}, \delta) = T_{init}$, then the maximum temperature reached during an arbitrary number of executions of S starting at $T_{init}$ will also meet the $T_{obj}$ constraint. + +**Proof:** We prove this property by induction on the number *m* of executions of *S*. Let *MaxTemp*(*k*, *S*) denote the maximal temperature during the *k*-th execution of *S*. + +The case $MaxTemp(1, S) \leq T_{obj}$ is proved by Proposition 5. This first execution of *S* is followed by a cooling time of size $\delta$, hence $T(C_{max} + \delta) = T_{init}$, which is the start time of the second execution of *S*. + +The induction hypothesis is then: + +$$ \max_{1 \leq k \leq m} MaxTemp(k, S) \leq T_{obj} \quad (21) $$ + +The *m*-th execution of *S* is followed by a cooling time of size *δ*, hence $T(m \cdot (C_{max}+\delta)) = T_{init}$, +which is the start time of the *m+1*-th execution of *S*. Applying the reasoning for $MaxTemp(1, S)$ +to the *m+1*-th execution yields $MaxTemp(m+1, S) \leq T_{obj}$. By the induction hypothesis, the +proof is then concluded. $\square$ + +The size $\delta$ of the cooling time depends on the difference between $T_f$ and $T_{init}$. 
It is obtained +by solving for $\delta$ the equation $T^{\text{cool}}(C_{max}, \delta) = T_{init}$: + +$$ +\begin{align*} +& \frac{B'}{A} + \left(T_f - \frac{B'}{A}\right) \cdot e^{-A(\delta - C_{max})} = T_{init} \\ +\Longleftrightarrow & e^{-A(\delta - C_{max})} = \frac{T_{init} - B'/A}{T_f - B'/A} \\ +\Longleftrightarrow & \delta = C_{max} - \frac{1}{A} \cdot \log\left(\frac{T_{init} - B'/A}{T_f - B'/A}\right) +\end{align*} +$$ + +Since $\delta$ must be an integer, we round it up: + +$$ \delta = \left\lceil C_{max} - \frac{1}{A} \cdot \log \left( \frac{T_{init} - B'/A}{T_f - B'/A} \right) \right\rceil \quad (22) $$ \ No newline at end of file diff --git a/samples/texts/4011427/page_18.md b/samples/texts/4011427/page_18.md new file mode 100644 index 0000000000000000000000000000000000000000..d2edff3661e1b3551de75f39545bfb734667a210 --- /dev/null +++ b/samples/texts/4011427/page_18.md @@ -0,0 +1,13 @@ +Finally, reactive systems must comply with hard deadlines. We do not directly address this when we generate the Pareto fronts. Once the Pareto front is computed, the user can eliminate all the points that fail to meet his or her hard deadline, and then choose one solution among the remaining ones by considering the other criteria. + +## 4.5 Taking into account the temperature of the adjacent cores + +In a multicore, multiple cores are located on a single chip at a very short distance from each other, so the temperature of each core impacts the other cores. This is taken into account by Eqs. (14) and (15). + +Now, one situation that can arise during our list scheduling algorithm is when the current task $\tau^{(n)}$ is scheduled at step $(n)$ on some core $c$ such that $c$'s neighbors are (partly) idle during the duration of $\tau^{(n)}$. This is illustrated in Fig. 6(a) where task $\tau^{(n)}$ is scheduled on $c_2$.
The risk is that the temperature computed at the end of $\tau^{(n)}$ is underestimated, because the tasks that will be scheduled on the neighbors of $c_2$ (i.e., $c_1$ and $c_3$ in Fig. 6(a)) in a *future* step of the heuristic are not yet accounted for. For instance, Fig. 6(b) illustrates the case of a task $\tau^{(n+1)}$ that is scheduled on $c_1$ at step $(n+1)$, causing an increase of the temperature on $c_2$ that was not taken into account when we scheduled $\tau^{(n)}$ on $c_2$.

Figure 6: (a) Partial schedule at step (n) and (b) at step (n + 1). A white box represents some new task $\tau$ such that its vertical length is proportional to $\mathcal{E}xe(\tau, c, f)$. A gray box represents an arbitrary sequence of tasks scheduled during the previous steps.

Figure 7: Temperature over-estimation obtained by adding a virtual task to each neighbor of $c_2$ whose finish time at step $(n-1)$ was strictly less than $L_{c_2}^{(n)}$.

We solve this issue by adding *virtual tasks* on all the neighbors to *over-estimate* the temperature: each time a task $\tau^{(n)}$ is scheduled on some core $c$, for each neighbor $c'$ of $c$ such that the
\ No newline at end of file
diff --git a/samples/texts/4011427/page_19.md b/samples/texts/4011427/page_19.md
new file mode 100644
index 0000000000000000000000000000000000000000..0aafa5bd8498af814194d8e2004e09d8fb7e43f9
--- /dev/null
+++ b/samples/texts/4011427/page_19.md
@@ -0,0 +1,88 @@
current finish time on $c'$ is strictly less than the finish time on $c$ (denoted $L_c^{(n)}$; note that it can be less than $L^{(n)}$), we add on $c'$ a virtual task that finishes exactly at $L_c^{(n)}$ and that runs at frequency $f_{max}$. These virtual tasks modify the value of $T_{c'}$ in Eq. (15), therefore guaranteeing that, whatever the future scheduling decisions, the runtime temperature on core $c$ at time $L^{(n)}$ will actually be below the temperature computed during step $(n)$ of our heuristic.
This is illustrated in Fig. 7. Of course, when actual tasks are scheduled on these cores $c'$ during future steps, the virtual tasks are removed and the temperature is recomputed accordingly.

Table 1 summarizes the main computations used in ERPOT.
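The virtual-task padding just described can be sketched in a few lines; the `finish` map, the `neighbors` map, and the tuple encoding of a virtual task below are illustrative assumptions, not ERPOT's actual data structures:

```python
def pad_neighbors_with_virtual_tasks(finish, core, neighbors):
    """After a task has been scheduled on `core`, occupy every neighbor whose
    current finish time is earlier with a virtual task (running at f_max), so
    that the temperature of `core` is over- rather than under-estimated."""
    virtual = []
    for c in neighbors.get(core, []):
        if finish[c] < finish[core]:
            # the virtual task occupies [finish[c], finish[core]) on core c
            virtual.append((c, finish[c], finish[core]))
            finish[c] = finish[core]
    return virtual

# Toy example: the task just placed on c2 finishes at t = 40, while its
# neighbors c1 and c3 are idle after t = 25 and t = 10 respectively.
finish = {"c1": 25, "c2": 40, "c3": 10, "c4": 40}
neighbors = {"c2": ["c1", "c3"]}
added = pad_neighbors_with_virtual_tasks(finish, "c2", neighbors)
# c1 and c3 are now (pessimistically) busy until t = 40
```

As described above, when real tasks are later scheduled on $c_1$ or $c_3$, the corresponding virtual tasks would be discarded and the temperature recomputed.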
| Quantity | Definition |
|---|---|
| Execution time | $\mathcal{E}xe(\tau, c, f) = \lceil \mathcal{E}xe_{nom}(\tau, c)/f \rceil$ |
| Failure rate | $\lambda_{sys} = \lambda_0 \cdot 10^{\frac{b(1-f)}{1-f_{min}}} \cdot e^{-\frac{E_a}{K}\left(\frac{1}{T(t)} - \frac{1}{T_0}\right)}$ |
| Reliability | $R(\tau, \mathcal{K}, t) = 1 - \prod_{i=1}^{k} \left(1 - e^{-\lambda_{sys}(c_i) \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})}\right)$ (computed with Reliability Block Diagrams) |
| GSFR | $\Lambda(S) = -\log(R(S))/U(S)$ |
| Utilization | $U(S) = \sum_{(\tau,c,f) \in S} \mathcal{E}xe(\tau,c,f)$ |
| Power | $P_{sys}(t) = \underbrace{\alpha \cdot T(t) + \beta_h}_{\text{leakage}} + \underbrace{\gamma \cdot C_{ef} \cdot V^2 \cdot f}_{\text{dynamic}}$ |
| Temperature differential equation | $C \cdot \frac{dT_c(t)}{dt} + G \cdot (T_c(t) - T_{amb}) + 2D\_heat = P(t)$ |
| Heat transfer from neighbor cores | $2D\_heat = \sum_{c' \in nbr(c)} \kappa(c, c') \cdot (T_c(t) - T_{c'}(t))$ |
| Solution | $T_c(t) = T_{\infty}^{heat} + (T_0 - T_{\infty}^{heat}) \cdot e^{-A(t-t_0)}$ |
| Steady-state temperature | $T_{\infty}^{heat} = B/A$ |
Table 1: Summary of all the computations.

## 4.6 Integer Linear Program

We now propose an ILP formulation of our scheduling problem, for comparison with the heuristic algorithm presented in Section 4.2. The models and the assumptions used in Section 4.2 are also used here for the ILP program. The decision variables are the following:

$$
\begin{align*}
S_{ik} &\in \mathbb{N}: && \text{start time of replica } k \text{ of task } i \\
F_{ik} &\in \mathbb{N}: && \text{finish time of replica } k \text{ of task } i \\
Sb_{ik} &\in \mathbb{N}: && \text{start time of replica } k \text{ of data dependency } i \\
Fb_{ik} &\in \mathbb{N}: && \text{finish time of replica } k \text{ of data dependency } i \\
W &\in \mathbb{N}: && \text{total execution time of the application}
\end{align*}
$$

$$
x_{ikc} = \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ is assigned to core } c \\ 0 & \text{otherwise} \end{cases}
$$

\ No newline at end of file
diff --git a/samples/texts/4011427/page_2.md b/samples/texts/4011427/page_2.md
new file mode 100644
index 0000000000000000000000000000000000000000..348d9ccd27927cdeee6a1f3988eac76773ed3c37
--- /dev/null
+++ b/samples/texts/4011427/page_2.md
@@ -0,0 +1,9 @@

Figure 1: Two transformation methods to compute the Pareto front (2D case): (a) $\varepsilon$-constraint method; (b) aggregation method.

Building the whole Pareto front and considering all constraints in a multi-criteria problem is a complicated task. To do this, several approaches exist [16], including the *aggregation* method that combines all the criteria in a single cost function, the *hierarchization* method that optimizes one criterion at a time, and the *transformation* method that transforms all the criteria except one into thresholds, and optimizes the remaining criterion under the constraint that the thresholds are satisfied (this last method is also called “budget optimization”).
It is also possible to use population-based methods (e.g., genetic algorithms, particle swarm, ant colony) or the Normal-Boundary Intersection method (NBI) [17].

Varying the cost function in the aggregation method or varying the order of the criteria in the hierarchization method can lead to computing several Pareto points, but not the entire Pareto front, a major theoretical drawback. The aggregation method is illustrated in Fig. 1(b) where the aggregation function is $f(Z_1, Z_2) = \alpha_1Z_1 + \alpha_2Z_2$. For two given values of $\alpha_1$ and $\alpha_2$, the Pareto point that is found is the one that minimizes $f$: geometrically, it is the point of the Pareto front intersecting the line of slope $-\alpha_1/\alpha_2$ that has the smallest value at the origin (the point $p$ in Fig. 1(b)). The problem is that the concave portions of the Pareto front will be missed, e.g., the point $x_4$ in Fig. 1(b); more generally, this is always the case if the aggregation function is convex (as is the case for $f$). However, if the aggregation function is not convex, then there is no guarantee that the computed points are on the Pareto front. For instance, a non-convex aggregation function could return the point $y_1$.

Overall, the transformation method is an effective method to build the entire Pareto front when used in an iterative way. With two criteria, this is known as the $\varepsilon$-Constraint Method ($\varepsilon$CM) [12], depicted in Fig. 1(a). The criterion $Z_1$ is transformed into a constraint. At iteration 1, the threshold for $Z_1$ is set to $K_1^1 = +\infty$, yielding the Pareto optimum $x_1$. At iteration 2, the threshold $K_1^2$ is set to the horizontal coordinate of $x_1$, therefore excluding the portion of the plane that is emphasized (in pink) and yielding the Pareto optimum $x_2$.
This process repeats until all the points of the Pareto front have been found (if there are finitely many of them), or until some pre-decided number of Pareto points has been found. Under the two conditions that (i) the number of Pareto optima is *finite* and that (ii) the minimization algorithm for $Z_2$ computes the *optimal* result, $\varepsilon$CM computes the entire optimal Pareto front.

$\varepsilon$CM was later generalized to more than two criteria in [13], but at a very high computational cost: $k^{m-1}\mathcal{O}(\text{opt})$, where $k$ is the number of points in the Pareto front, $m$ is the number of criteria, and $\mathcal{O}(\text{opt})$ is the complexity of the single-criterion optimization algorithm. This computational complexity makes the generalized $\varepsilon$CM infeasible for our problem (if only for the
\ No newline at end of file
diff --git a/samples/texts/4011427/page_20.md b/samples/texts/4011427/page_20.md
new file mode 100644
index 0000000000000000000000000000000000000000..a6811e3988bc5709a0df5fc8db827813c8f6e1e2
--- /dev/null
+++ b/samples/texts/4011427/page_20.md
@@ -0,0 +1,65 @@

$$
\begin{align*}
x_{ikcfs} &= \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ is assigned to core } c \text{ at frequency } f \text{ and after a cooling time of } s \text{ time units} \\ 0 & \text{otherwise} \end{cases} \\
\sigma_{ijkk'} &= \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ starts before replica } k' \text{ of task } j \\ 0 & \text{otherwise} \end{cases} \\
Y_{iK} &= \begin{cases} 1 & \text{if task } i \text{ is replicated } K \text{ times} \\ 0 & \text{otherwise} \end{cases} \\
B_{ik} &= \begin{cases} 1 & \text{if replica } k \text{ of task } i \text{ has an outgoing data dependency} \\ 0 & \text{otherwise} \end{cases}
\end{align*}
$$

The main objective of our optimization problem is minimizing the total execution time. Then, two kinds of ILP constraints must be formulated.
The first kind are the constraints that guarantee the schedulability:

1. Every replica *k* of task *i* should be assigned to exactly one core *c*:

$$
\forall i, \forall k, \sum_c x_{ikc} = 1 \tag{23}
$$

2. Every replica *k* of task *i* on core *c* should be assigned to exactly one frequency level and be preceded by exactly one cooling time (possibly of size 0):

$$
\forall i, \forall k, \forall c, \sum_{f,s} x_{ikcfs} = x_{ikc} \quad (24)
$$

3. The finish time of every replica *k* of task *i* should be less than or equal to the total execution time:

$$
\forall i, \forall k, F_{ik} \le W \tag{25}
$$

4. The finish time of every replica *k* of task *i* is computed based on its execution time and its start time:

$$
\forall i, \forall k, F_{ik} = S_{ik} + \sum_{c,f,s} x_{ikcfs} \cdot exe_c(i,c,f,s) + Fb_{ik} \quad (26)
$$

where $exe_c(i, c, f, s)$ is the execution time of task $i$ on core $c$ at the $f$-th frequency level and after a cooling time of size $s$: $exe_c(i, c, f, s) = \mathcal{E}xe(i, c, f) + s$.

5.
Tasks cannot overlap and must obey their precedence order (*M* is a constant greater than the largest existing number in the ILP program, the so-called “big *M*” method [39]):

$$
\forall i \neq j, \forall k, \forall k', \sigma_{ijkk'} + \sigma_{jik'k} \leq 1 \tag{27}
$$

$$
\forall i, \forall j, \forall k, \forall k', S_{ik} \leq S_{jk'} + (1 - \sigma_{ijkk'}) \cdot M \quad (28)
$$

$$
\forall i, \forall j, \forall k, \forall k', \forall c, \; F_{ik} \le S_{jk'} + (2 - x_{ikc} - x_{jk'c}) \cdot M + (1 - \sigma_{ijkk'}) \cdot M \tag{29}
$$

$$
\forall i \in pred(j), \forall k, \forall k', F_{ik} \leq S_{jk'} \quad (30)
$$

$$
\forall i \in pred(j), \forall k, \forall k', \sigma_{ijkk'} = 1 \quad (31)
$$

\ No newline at end of file
diff --git a/samples/texts/4011427/page_21.md b/samples/texts/4011427/page_21.md
new file mode 100644
index 0000000000000000000000000000000000000000..a27951d34d1d8fd5712bd13bafcaeadaaa836d78
--- /dev/null
+++ b/samples/texts/4011427/page_21.md
@@ -0,0 +1,69 @@

6. If task *j* is a successor of *i* and both are assigned to different cores, then this data dependency must be transmitted on the bus:

$$
\forall i, \forall k, \forall j \in \text{pred}(i), \forall k', \forall c' \neq c, \quad B_{ik} = \bigvee_{c} \left( x_{ikc} \wedge \left( \bigvee_{j,k',c'} x_{jk'c'} \right) \right) \quad (32)
$$

where the logical operators $\wedge$ and $\vee$ are linearized [39].

7.
The start time of data dependency *i* is computed based on the first idle time of the bus and on the previous data dependencies transmitted on the bus:

$$
\forall i, \forall k, \forall b, Sb_{ik} = \sum_{j,k'} \left( (\sigma_{jik'k} \land B_{ik} \land B_{jk'}) \cdot exe_b(j,b) \right) \quad (33)
$$

where $exe_b(j, b)$ is the transmission time of data dependency $j$ on bus $b$: $exe_b(j, b) = \mathcal{E}xe(j, b, f_b)$ (recall that buses operate at the fixed frequency $f_b$, and that we do not insert cooling times on the buses).

8. The finish time of each data dependency is the sum of its start time and its transmission time:

$$
\forall i, \forall b, \forall k, Fb_{ik} = Sb_{ik} + B_{ik} \cdot exe_b(i, b) \quad (34)
$$

9. Data dependencies must be serialized on the bus:

$$
\begin{array}{l}
\forall i, \forall k, \forall j \ge i, \forall k', \forall b, \\
Sb_{ik} \le Sb_{jk'} - exe_b(i, b) + (1 - B_{ik} + \sigma_{ijkk'}) \cdot M
\end{array}
\tag{35}
$$

The second kind are the ILP constraints that guarantee that the GSFR / power consumption / temperature remain below $\Lambda_{obj}$ / $P_{obj}$ / $T_{obj}$:

1. The GSFR must be less than or equal to $\Lambda_{obj}$:

$$
\forall i, \sum_K Y_{iK} = 1 \quad (36)
$$

$$
\forall i, \forall c, \sum_k x_{ikc} \le 1 \quad (37)
$$

$$
\forall i, \sum_{k,c} x_{ikc} = \sum_K K \cdot Y_{iK} \quad (38)
$$

$$
\forall i, \sum_{k,c,f,s} x_{ikcfs} \cdot \text{GSFR}(c, f, s) + \sum_{k,b} B_{ik} \cdot \text{GSFR}(b, f_b, 0) \leq \Lambda_{obj} \quad (39)
$$

2. The power consumption must be less than $P_{obj}$:

$$
\sum_{i,k,c,f,s} x_{ikcfs} \cdot exe_c(i,c,f,s) \cdot P(f,s) + \sum_{i,k,b} B_{ik} \cdot P(f_b, 0) \cdot exe_b(i,b) \leq P_{obj} \cdot W \quad (40)
$$

where $P(f,s)$ is the sum of the leakage and dynamic power consumption when the task runs at frequency $f$ and is preceded by a cooling time of size $s$.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_22.md b/samples/texts/4011427/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..8fcda9d9a21c2b0e838ce56dbc8c6e3e0ccdb3a8
--- /dev/null
+++ b/samples/texts/4011427/page_22.md
@@ -0,0 +1,29 @@

3. The temperature on each hardware component (cores and bus) must be less than or equal to $T_{obj}$:

$$
\forall i, \forall k, \log(T_{\infty}^{\text{heat}} - T_0) - a \cdot F_{ik} + C \cdot M \ge \log(T_{\infty}^{\text{heat}} - T_{obj}) \tag{41}
$$

$$
\forall i, \forall k, \log(T_0 - T_{\infty}^{\text{cool}}) - a \cdot F_{ik} \le \log(T_{obj} - T_{\infty}^{\text{cool}}) + (1-C) \cdot M \tag{42}
$$

where $T_0$, $T_\infty^{\text{heat}}$, and $T_\infty^{\text{cool}}$ represent respectively the initial temperature at $t_0$, the heating steady-state temperature, and the cooling steady-state temperature. Eqs. (41) and (42) are for the cores; for the bus it suffices to replace $F_{ik}$ by $Fb_{ik}$ and to take the value of the parameter $a$ corresponding to the bus.

Based on these equations, the main objective of the ILP program is to minimize the total execution time (the *W* variable in our ILP formulation), under the constraints specified by Eqs. (23) to (42). In Section 5.4, we will compare the Pareto fronts computed respectively by our quad-criteria heuristic ERPOT and by an ILP program.

# 5 Simulation results

We ran several kinds of experiments to evaluate our ERPOT heuristic. In Section 5.1, we assess the influence of the *temperature*, *power consumption*, and *reliability* constraints on the *execution time*. In Section 5.2, we show a whole Pareto front for a given problem instance. In Section 5.3, we compare ERPOT with the PowerPerf-PET scheduling heuristic from [4]. Finally, in Section 5.4, we compare ERPOT with the ILP program of Section 4.6.
+ +The target multicore chip is shown in Fig. 3(b) and the parameter values are provided in Table 2, taken in part from [8] and [7]. + +
| Parameter values | Comment |
|---|---|
| $\lambda_0 = 10^{-5}$, $C = 0.03$ J/K, $G = 0.3$ W/K, $\beta_h = -11$ W | for each core |
| $C = 0.01$ J/K, $G = 0.1$ W/K, $\beta_h = -4$ W, $\beta_c = -8$ W, $\alpha = 0.04$ W/K | for the bus |
| $C_{ef} = 10^{-8}$ J/V² | same for the cores and the bus |
| $\kappa(bus, c_i) = 0.03$ W/K, $\kappa(c_1, c_2) = \kappa(c_3, c_4) = 0.1$ W/K | thermal conductivity |
| {(900 MHz, 1.20 V), (600 MHz, 1.10 V), (300 MHz, 1.06 V)} | (voltage, frequency) pairs for the cores |
| {$f_{max} = f_3 = 1$, $f_2 = 2/3$, $f_{min} = f_1 = 1/3$} | scaling factors |
| (300 MHz, 1.06 V) | (voltage, frequency) pair for the bus |
| $f_b = 1/3$ | scaling factor |
Table 2: Parameter values.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_23.md b/samples/texts/4011427/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/samples/texts/4011427/page_24.md b/samples/texts/4011427/page_24.md
new file mode 100644
index 0000000000000000000000000000000000000000..79ad9e3ac3def8a81187dfa61147665b3d77796a
--- /dev/null
+++ b/samples/texts/4011427/page_24.md
@@ -0,0 +1,13 @@

## 5.1 Influence of the constraints on the schedules

Fig. 8 has been obtained with an *Alg* graph consisting of 41 nodes, generated randomly with TGFF [40] (with the maximum value of the in and out degrees set to 4), and scheduled on the fully connected quad-core chip specified above. The nominal WCETs of the tasks are in the range [$5\ ms$, $15\ ms$] while the nominal WCCTs of the data dependencies are in the range [$3\ ms$, $5\ ms$]$^3$.

Fig. 8(a) shows the variation of the chip temperature as a function of the execution time and the effect of the insertion of cooling times in the schedule, for two different values of the initial temperature $T_{init}$: 298 K and 357 K. In both cases, $T_{obj} = 360\ K$, $P_{obj} = 2\ W$, and $\Lambda_{obj} = 10^{-8}$. When $T_{init} = 298\ K$, the temperature increases steadily during a transient phase, and then stabilizes just below $T_{obj}$, by virtue of the cooling times. When $T_{init} = 357\ K$, the temperature remains just below $T_{obj}$ during the whole schedule, again by virtue of the cooling times. The initial temperature has a significant impact on the schedule length, from 451 ms for 298 K (indicated by the dashed vertical line) to 608 ms for 357 K, a 35% increase.

Figure 8: (a) Evolution of the temperature when $T_{init} = 298\ K$ and $T_{init} = 357\ K$. (b) Evolution of the temperature of each component.

Fig.
8(b) depicts the temperature variation of the five hardware components of the chip (bus, C1, C2, C3, and C4) during a schedule produced with the same parameters as Fig. 8(a). The temperatures of the four cores remain in a very small interval, [356 K, 360 K], demonstrating the effectiveness of our scheduling heuristic with respect to the peak temperature. The bus temperature is significantly lower, for the simple reason that the bus is often idle. The fact that the temperature variations are very small, both over time and between the cores, also helps limit the aging of the chip [7].

Fig. 9 has been obtained with 50 DAGs generated randomly, each with 50 tasks having an $\mathcal{E}xe_{nom}$ in the range [3, 12], and such that the total sum of the $\mathcal{E}xe_{nom}$ of their tasks is in the range [540, 560]. Each point is the average value of the $C_{max}$ over the 50 DAGs and each vertical bar shows the range around the average value. The schedule length increases when $\Lambda_{obj}$ decreases (Fig. 9(a)). This is expected since more replications are required to satisfy the lower failure rate constraint: the two criteria are antagonistic. Moreover, the schedule length increases when $P_{obj}$ decreases (Fig. 9(b)). This is expected since lowering the power consumption requires lowering the frequencies used by the cores, which increases the execution time. Again, the two criteria are antagonistic.

³From now on, the time unit will be the millisecond (ms).
\ No newline at end of file
diff --git a/samples/texts/4011427/page_25.md b/samples/texts/4011427/page_25.md
new file mode 100644
index 0000000000000000000000000000000000000000..37b88c8b891f46e783fa832a61c1601354f61a11
--- /dev/null
+++ b/samples/texts/4011427/page_25.md
@@ -0,0 +1,15 @@

Figure 9: (a) Influence of $\Lambda_{obj}$ and (b) of $P_{obj}$ on the execution time.
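The saturation just below $T_{obj}$ seen in Fig. 8 is a direct consequence of the exponential temperature model and of the cooling time of Eq. (22). A minimal numerical sketch of that closed form; the constants $A$, $B'$, and the temperatures below are illustrative, not the values of Table 2:

```python
import math

def cooling_time(A, B_prime, T_init, T_f, C_max):
    """Size of the cooling time following Eq. (22): the value of delta such
    that the exponential cooling law brings the temperature back from T_f
    down to T_init, rounded up to an integer number of time units."""
    ratio = (T_init - B_prime / A) / (T_f - B_prime / A)
    return math.ceil(C_max - math.log(ratio) / A)

# Illustrative constants: cooling steady state B'/A = 300 K, a schedule of
# length C_max = 100 ms ending at T_f = 350 K, initial temperature 310 K.
delta = cooling_time(A=0.05, B_prime=15.0, T_init=310.0, T_f=350.0, C_max=100)
# the hotter the schedule ends (the larger T_f), the larger the cooling time
```

With these constants the ratio is $10/50$, so $\delta = \lceil 100 + 20 \log 5 \rceil = 133$; lowering $T_f$ to 330 K shrinks it to 122.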
## 5.2 Pareto fronts obtained with ERPOT

In this section, we compute the whole Pareto front for an *Alg* graph with 41 nodes onto the quad-core *Arc* graph of Fig. 3(b) with the parameters of Table 2. Ideally, we would like to visualize this Pareto front in 4D; however, a 4D front is very hard to read when printed on paper. To circumvent this difficulty, we show several Pareto fronts in 3D in the (execution time, GSFR, temperature) space and vary the fourth dimension, the power consumption. We use 10 different values for each criterion. These threshold values must be provided by the user because they are application and platform dependent.

• $\Lambda_{obj} \in \{10^{-9}, 3.16 \cdot 10^{-9}, 10^{-8}, \dots, 3.16 \cdot 10^{-5}\}$;

• $P_{obj} \in \{1.3, 1.6, 1.9, \dots, 4.0\}$, in Watts;

• $T_{obj} \in \{340, 345, 350, \dots, 385\}$, in Kelvin.

Algorithm 2 implements the grid method for our four criteria. The function ERPOT with the parameters $\Lambda_i$, $P_j$, $T_k$ returns the Pareto point that minimizes the execution time under the constraints $\Lambda < \Lambda_i$, $P < P_j$, and $T < T_k$. Since $\Lambda_{obj}$ follows a logarithmic scale, $\Lambda_{incr}$ is used as a multiplier.

Fig. 10 shows the resulting Pareto front in 3D for three different values of $P_{obj}$: 1.3 W, 2.5 W, and 4.0 W. A lower value of $P_{obj}$ implies higher values for the $C_{max}$. This is expected because the power consumption and the execution time are antagonistic.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_26.md b/samples/texts/4011427/page_26.md
new file mode 100644
index 0000000000000000000000000000000000000000..84d07739db8d8417a556cb5a4a1f656432f7c1f1
--- /dev/null
+++ b/samples/texts/4011427/page_26.md
@@ -0,0 +1,25 @@

**Algorithm 2** Grid method algorithm for 4 criteria.
**input:** The range [$\Lambda_{min}, \Lambda_{max}$] and the increment $\Lambda_{incr}$
**input:** The range [$P_{min}, P_{max}$] and the increment $P_{incr}$
**input:** The range [$T_{min}, T_{max}$] and the increment $T_{incr}$
**output:** The list of Pareto points *Res*

1: **function** GRID($\Lambda_{min}, \Lambda_{max}, \Lambda_{incr}, P_{min}, P_{max}, P_{incr}, T_{min}, T_{max}, T_{incr}$)
2:     Res ← ∅; $\Lambda_1$ ← $\Lambda_{min}$; i ← 1
3:     **while** $\Lambda_i \le \Lambda_{max}$ **do**
4:         $P_1$ ← $P_{min}$; j ← 1
5:         **while** $P_j \le P_{max}$ **do**
6:             $T_1$ ← $T_{min}$; k ← 1
7:             **while** $T_k \le T_{max}$ **do**
8:                 Res ← Res ∪ ERPOT($\Lambda_i, P_j, T_k$)
9:                 $T_{k+1}$ ← $T_k + T_{incr}$; k ← k + 1
10:             **end while**
11:             $P_{j+1}$ ← $P_j + P_{incr}$; j ← j + 1
12:         **end while**
13:         $\Lambda_{i+1}$ ← $\Lambda_i \times \Lambda_{incr}$; i ← i + 1
14:     **end while**
15:     **return** REMOVEDOMINATEDPOINTS(*Res*)
16: **end function**

Figure 10: Pareto fronts in 3D for three different values of $P_{obj}$: 1.3 W, 2.5 W, and 4.0 W.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_27.md b/samples/texts/4011427/page_27.md
new file mode 100644
index 0000000000000000000000000000000000000000..d1d252c85d9b8a9f29d7dde46bfd9c5b9bf7af46
--- /dev/null
+++ b/samples/texts/4011427/page_27.md
@@ -0,0 +1,19 @@
| Benchmark | automotive | consumer | networking | office | telecom | random | random | random | random | random |
|---|---|---|---|---|---|---|---|---|---|---|
| DAG size | 24 | 12 | 13 | 5 | 30 | 40 | 50 | 60 | 70 | 80 |
| ERPOT (cells) | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 |
| PowerPerf-PET (cells) | 7 | 4 | 6 | 3 | 8 | 7 | 8 | 6 | 5 | 7 |
| $C_{max}$ improvement (%) | 27.64 | 36.88 | 38.30 | 39.92 | 29.33 | 30.81 | 35.82 | 34.59 | 31.32 | 30.67 |
Table 3: ERPOT vs. PowerPerf-PET: ERPOT systematically outperforms PowerPerf-PET on the $C_{max}$ (by an average of 33.5%).

As expected also, when $T_{obj}$ decreases, the $C_{max}$ increases because more cooling times must be inserted and lower frequencies are chosen. For instance, in Fig. 10(a), at 360 K the $C_{max}$ varies in the range [866 ms, 1038 ms], while at 340 K it varies in the range [1041 ms, 1222 ms].

When $\Lambda_{obj}$ decreases, the $C_{max}$ increases because more tasks must be replicated to satisfy the lower failure rate constraint. Again this is expected because these two criteria are antagonistic. For instance, in Fig. 10(c), at $10^{-5}$ failures per ms, the $C_{max}$ varies in the range [341 ms, 498 ms], while it varies in [403 ms, 594 ms] at $10^{-9}$.

## 5.3 Comparison with PowerPerf-PET

We have also compared ERPOT with the PowerPerf-PET heuristic from [4] (Algorithm 7), but without considering the reliability since PowerPerf-PET does not address this criterion. PowerPerf-PET uses two separate cost functions to select the core and the frequency to execute the current task. To select the core, it evaluates, for each task $\tau_i$, the product of the total power consumption of each core before mapping $\tau_i$ and its earliest possible available time for executing $\tau_i$. The task is allocated to the core having the minimum value of this product (the “PowerPerf” part). Then, to select the frequency, it uses a weighted sum of the performance $P$, the energy $E$, and the temperature $T$ (the “PET” part). In contrast, we use a unique cost function to select the core, its frequency, and the length of the cooling time (if any). Regarding the cooling times, PowerPerf-PET never inserts any. Moreover, the temperature model of PowerPerf-PET is based on measurements rather than on an analytic model derived from the differential heat propagation equation, and it does not take into account the heat propagation from the neighbor cores.
Similarly, the power consumption model is based on measurements, and the effect of the temperature on the power consumption is not taken into account.

As application graphs, we choose the five benchmarks from the E3S suite [41] and five DAGs randomly generated with TGFF [40] (with the maximum value of the in and out degrees set to 4). For each DAG, the target architecture is the quad-core platform of Section 5.1. For ERPOT, we take the following values for the $P_{obj}$ and $T_{obj}$ constraints, resulting in 100 points in each Pareto front:

* $P_{obj} \in \{1.3, 1.6, 1.9, \dots, 4.0\}$, in Watts;

* $T_{obj} \in \{340, 345, 350, \dots, 385\}$, in Kelvin.

For PowerPerf-PET, the three weights of the “PET” weighted sum are each taken in the interval $[0, 1]$ with a 0.01 increment.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_28.md b/samples/texts/4011427/page_28.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c6ad962f4202579f17e3d009a4b82a8ce044b40
--- /dev/null
+++ b/samples/texts/4011427/page_28.md
@@ -0,0 +1,19 @@

Thanks to the grid method, ERPOT produces one Pareto point in each cell of this 2D space. This is not the case for PowerPerf-PET because it relies on the transformation method with a weighted sum. As explained in Section 2, this does not allow exploring the entire search space. Table 3 reports the number of cells for which each algorithm succeeds in finding a valid schedule. In each cell of the grid containing a solution from ERPOT and from PowerPerf-PET, we compute the percentage improvement of the $C_{max}$ between these two schedules as:

$$ \frac{C_{max}(\text{PowerPerf-PET}) - C_{max}(\text{ERPOT})}{C_{max}(\text{PowerPerf-PET})} \times 100 \quad (43) $$

Finally, we compute the average of these percentages over all such cells of the 2D space. The results are reported in Table 3. ERPOT systematically outperforms PowerPerf-PET by at least 27%. Several reasons explain this significant difference.
First, PowerPerf-PET is based on a weighted sum of its three criteria *P*, *E*, and *T*. This does not allow the concave parts of the Pareto front to be found (see Fig. 1(b)). As a consequence, PowerPerf-PET computes the convex hull of the Pareto front while ERPOT computes the actual front, including its concave parts. Second, ERPOT uses a smart cost function to sort the ready tasks, taking into account for each task $\tau$ the longest execution path from $\tau$ to the end of the graph (see Section 4.2). This allows us to schedule first the tasks that are on the critical path, which reduces the overall execution time.

## 5.4 Evaluation of the ILP model

We have implemented our ILP program (see Section 4.6) in the IBM ILOG CPLEX solver [42] (version 12.6.3), and we have run it on an Intel quad-core i5 CPU with 6 GB of RAM. It returns the optimal result, i.e., the schedule with the minimal execution time under the $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$ constraints. The drawback is that the complexity of finding this optimal schedule is exponential in the size of the problem instance (number of tasks of the Alg graph plus number of cores times number of frequencies). To be specific, our ILP program was not able to complete its execution for DAGs larger than 9 tasks because the CPLEX solver ran out of memory.

We have run our ILP program on 10 DAGs randomly generated with TGFF [40], each with 8 tasks, and a homogeneous dual-core with a single bus and three frequency/voltage levels. The WCETs of the tasks are randomly chosen in the range [3 ms, 12 ms] while the WCCTs are randomly chosen in the range [2 ms, 4 ms]. Besides, the cooling times are limited to 1 ms. Finally, ten different values of each criterion $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$ are considered (as in Section 5.2).
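Both ERPOT and the ILP program yield one candidate point per cell of the grid, and the Pareto front itself is then obtained by discarding the dominated points, as in the last step of Algorithm 2. A minimal sketch of such a dominance filter for our four minimized criteria; the tuple layout $(C_{max}, \Lambda, P, T)$ is an assumption for illustration:

```python
def dominates(p, q):
    """p dominates q iff p is no worse than q on every criterion and strictly
    better on at least one (all four criteria are minimized)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# toy candidate points as (Cmax in ms, GSFR, power in W, temperature in K)
pts = [(451, 1e-8, 2.0, 360), (608, 1e-8, 2.0, 360), (500, 1e-9, 1.8, 355)]
front = pareto_front(pts)
# the second point is dominated by the first; the first and third points are
# incomparable, so the front contains exactly those two
```

The quadratic pairwise scan is enough here because the grid bounds the number of candidate points (one per cell).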
For each DAG, we build the full 4D Pareto front with ERPOT and with the ILP program, and in each cell we compute the percentage difference between the $C_{max}$ of these two schedules as:

$$ \frac{C_{max}(\text{ERPOT}) - C_{max}(\text{ILP})}{C_{max}(\text{ERPOT})} \times 100 \quad (44) $$

For each Pareto front, we compute the minimum, maximum, and average difference between the two solutions. Table 4 summarizes the results. On average, the length of the non-optimal schedule obtained with our ERPOT heuristic is between 8% and 10% above the length of the optimal schedule obtained with the ILP program, which we claim is not too bad. However, recall that the ILP solver can only compute the Pareto front for very small DAGs, no larger than 8 tasks.

Finally, Figure 11 plots the percentage difference of the $C_{max}$ between the two Pareto fronts generated by ERPOT and by the ILP program for DAG #6 from Table 4. The largest deviations between ERPOT and ILP occur when the $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$ constraints are the most stringent. The reason is that the ILP program makes better choices between inserting cooling times and lowering the frequency/voltage.
\ No newline at end of file
diff --git a/samples/texts/4011427/page_29.md b/samples/texts/4011427/page_29.md
new file mode 100644
index 0000000000000000000000000000000000000000..86b51f360405d0c1f3f45daad47c7f8f7a8b2822
--- /dev/null
+++ b/samples/texts/4011427/page_29.md
@@ -0,0 +1,13 @@
| DAG | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| min (%) | 3.20 | 2.90 | 3.12 | 3.08 | 2.89 | 3.84 | 2.71 | 4.16 | 3.94 | 3.30 |
| max (%) | 23.74 | 24.10 | 25.60 | 22.38 | 24.67 | 25.10 | 25.47 | 23.74 | 22.69 | 26.38 |
| avg. (%) | 8.38 | 9.26 | 8.17 | 8.55 | 8.49 | 9.43 | 9.07 | 9.94 | 8.92 | 9.33 |
Table 4: ERPOT vs. ILP: ILP systematically outperforms ERPOT on the $C_{max}$ (by an average of 8.95%).

Figure 11: Heatmap of the percentage difference between the $C_{max}$ obtained by ERPOT and by the ILP program ($\Lambda_{obj} = 10^{-6}$).

# 6 Related work

Several related works optimize the execution time, reliability, power consumption, and temperature for applications running on multicores. Most of them consider only two criteria: the execution time and one of the other three. A few related works consider three criteria. However, no existing results consider all four criteria together.

Many results address the problem in the context of applications modeled as a set of real-time tasks, usually preemptible, and scheduled by a real-time operating system (RTOS) according to some priority policy (see e.g. [43, 44, 45] to cite only a few). Each task $\tau_i$ is defined by a tuple $(A_i, C_i, D_i, \pi_i)$, where $A_i$ is the arrival time (defined either according to a periodic or sporadic activation model), $C_i$ is the worst-case execution time, $D_i$ is the deadline, and $\pi_i$ is the priority. Since our application model is totally different, we do not detail these works.

In [5], Girault and Kalla present a bi-criteria optimization ready list heuristic algorithm to schedule a DAG of tasks onto a heterogeneous multi-core processor. The algorithm minimizes both the total execution time and the soft error rate. Instead of directly using the system's reliability as an optimization criterion, the authors introduce a new criterion called the Global System Failure Rate (GSFR). The GSFR is computed based on the system's reliability (i.e., the reliability of the schedule on the multicore, computed with classical reliability techniques such as reliability block diagrams) and on the total execution time.
The main advantage of the \ No newline at end of file diff --git a/samples/texts/4011427/page_3.md b/samples/texts/4011427/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..e707d26efa16e914e7c43e7bc3479f8770f35494 --- /dev/null +++ b/samples/texts/4011427/page_3.md @@ -0,0 +1,11 @@ +reason that the number of Pareto points is not bounded). + +Instead, for each of the $m-1$ criteria turned into a constraint, we simply divide the useful range of this criterion into $p$ equally spaced intervals, and we invoke a single-criterion optimization algorithm in each of the resulting $p^{m-1}$ zones of the search space. We call this the Grid Method, depicted in Fig. 2(a), where the $Z_1$ axis is divided into 4 intervals, [$K_1^5, K_1^4$) to [$K_1^2, K_1^1$). The resulting complexity therefore becomes $p^{m-1}\mathcal{O}(opt)$. This is still exponential in $m-1$, but the number of intervals $p$ is much less than the number of Pareto points $k$. The number of intervals can be identical for each of the $m-1$ criteria or not: each range can thus be divided into $p_i$ intervals (not even necessarily equally spaced), resulting in an overall complexity of $(\prod_{i=1}^{m-1} p_i)\mathcal{O}(opt)$. + +Figure 2: The grid method to compute the Pareto front (2D case): (a) with a coarse regular grid; (b) with an irregular grid. + +The choice of the intervals in each dimension obviously has an impact on the resulting Pareto front. For instance, the grid method illustrated in Fig. 2(a) builds a Pareto front that does not include the point $x_4$ because, in the interval [$K_1^4, K_1^3$) (emphasized in pink), the point that minimizes $Z_2$ is $x_3$. With a different grid, the point $x_4$ could be obtained, as shown in Fig. 2(b). On the one hand, using a finer grid will produce a Pareto front with more points, but this can become too costly. On the other hand, using an irregular grid could find more Pareto points, but this seems very difficult to control a priori. 
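As an illustration, the grid sweep for the 2-criteria case can be sketched as follows; `opt_z2` is a hypothetical single-criterion optimizer that minimizes $Z_2$ under the constraint $Z_1 < k$ (returning None for an infeasible zone), and the final filter discards dominated candidates:

```python
def grid_pareto(opt_z2, k_min, k_max, delta):
    """Grid-method sketch for 2 criteria: sweep the bound k on Z1 from
    k_max down to k_min in steps of delta, minimize Z2 in each zone,
    then keep only the non-dominated candidates."""
    candidates = []
    k = k_max
    while k >= k_min:
        point = opt_z2(k)          # best (z1, z2) with z1 < k, or None
        if point is not None:
            candidates.append(point)
        k -= delta

    def dominated(p):
        # p is dominated if some candidate is no worse on both criteria
        # and differs from p (hence strictly better on at least one).
        return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in candidates)

    return sorted({p for p in candidates if not dominated(p)})
```

With a toy point set and `opt_z2` returning the feasible point of smallest $Z_2$, a fine `delta` recovers the non-dominated points; a coarser `delta` can miss an optimum hidden inside a zone, exactly the $x_4$ situation discussed above.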
It is simpler to generate the Pareto front with evenly spaced intervals in each dimension (the number of intervals depending on the time one is ready to spend to compute the Pareto front) and then, in order to locally improve the Pareto front around a particular Pareto optimum, either to use local search methods or to refine the intervals locally around this Pareto optimum. For instance, in Fig. 1, if $x_3$ is identified as an interesting compromise, then the user can either use a local search algorithm around $x_3$, or he/she can divide the [$K_1^4, K_1^3$) interval into smaller intervals and invoke again the $Z_2$ minimization function in these smaller intervals, which is very likely to find the Pareto optimum $x_4$. + +To summarize, we use Algorithm 1 to implement the grid method, in the particular case of 2 criteria as in Fig. 2(b). The function OPT($K_1^i$) returns the Pareto point that minimizes $Z_2$ under the constraint $Z_1 < K_1^i$. The function REMOVENONDOMINATEDPOINTS(Res) removes the dominated points from the list Res, so that only the Pareto front remains. + +As a final remark, note that in Fig. 1 and in Fig. 2, the Pareto front is depicted as a solid green line. It delimits the portion of the plane (above it and on its right) where all the points are dominated by a known Pareto optimum. This differs from the broken line that connects the Pareto optima (depicted in dotted blue), as is demonstrated by Fig. 
1(b): the point $x_4$ is above the dotted blue line, and yet we do not know whether or not it represents a feasible compromise \ No newline at end of file diff --git a/samples/texts/4011427/page_30.md b/samples/texts/4011427/page_30.md new file mode 100644 index 0000000000000000000000000000000000000000..4da270ccaeb55e59dfcb56b47ff0d3edda4deca2 --- /dev/null +++ b/samples/texts/4011427/page_30.md @@ -0,0 +1,9 @@ +GSFR over the reliability is that it is an *invariant* measure of the schedule (see Section 3.3 for details), which makes it suitable for the transformation method; this allows the authors to compute the Pareto front in the 2D space (execution time, GSFR). This method has been generalized in [2] by Assayad et al. to take into account the power consumption, thereby providing a tri-criteria list scheduling algorithm to optimize the execution time, the GSFR, and the power consumption. The effect of voltage and frequency on the failure rate per time unit of the cores is taken into account. However, it does not take into account the temperature of the cores. ERPOT extends [2] precisely to take into account the temperature. + +In [7], Das et al. propose a bi-criteria genetic algorithm to schedule a DAG of tasks onto a set of identical cores interconnected in a mesh network topology. The algorithm maximizes two criteria, (i) the system reliability (called the “performability”) and (ii) the lifetime, under a given energy constraint $E_{max}$ and a given latency constraint $P_{max}$ (the latency is incorrectly denoted “period”). DVFS is used to lower the total energy consumed. The reliability model, a variant of the Poisson model of Shatz and Wang [24], takes into account the voltage/frequency effect on the failure rate, as in [10]. The lifetime is computed by taking into account failures due to electromigration (EM), with a Weibull distribution. 
In order to improve the system reliability and the lifetime, some tasks are chosen to be actively replicated; this choice is made by the genetic algorithm. The result is a set of non-dominated solutions in the 2D space (reliability, lifetime). The reliability is improved by increasing the number of replicas, while the lifetime is improved by lowering the temperature. It follows that increasing the number of replicas increases the chip temperature, which in turn decreases the lifetime; in this sense the two criteria are antagonistic. However, due to the very high cost of the genetic algorithm, only small DAGs can be scheduled (up to 20 tasks), while we are able to handle DAGs with more than 100 tasks. Moreover, the effect of the chip temperature on the lifetime and on the reliability is not taken into account. Finally, the leakage power is ignored. + +In [4], Sheikh and Ahmad address the PETOS problem (Performance, Energy, and Temperature Optimized Scheduling), where a DAG of tasks must be scheduled onto a set of *M* parallel cores operating under *K* available frequency levels. Because large DAGs are considered, only heuristic algorithms can be used (i.e., neither ILP nor exhaustive search algorithms). The authors propose 16 different heuristics, which are classified according to (i) the core selection strategy and (ii) the frequency selection strategy. None of the heuristic algorithms proposed in [4] is able to optimize the reliability, including PowerPerf-PET, already described in Section 5.3. + +In [9], Xie et al. present an energy-efficient fault-tolerant list scheduling heuristic. The application model is a DAG of tasks, and the architecture model is a distributed memory multi-processor with a CAN network. Processors are heterogeneous and equipped with DVFS, but leakage power is ignored. 
The reliability model is the Poisson model of Shatz and Wang [24], and the effect of DVFS on the failure rate per time unit is taken into account as in [10], but not the temperature. Active replication is applied to each task of the DAG so as to satisfy a given reliability goal for the resulting system (e.g., 0.99). When doing so, the frequencies of the processors the task is mapped to are taken into account. The proposed heuristic minimizes the total energy under this reliability constraint, but the authors do not compute Pareto fronts. + +Finally, Papadimitriou and Yannakakis address the problem of computing an *approximate* Pareto front in the case where the size of the exact Pareto front (i.e., the number of Pareto optima) is exponential in the size of the problem instance. They define the notion of an $\varepsilon$-approximate Pareto front, for which each point is at most at a distance $\varepsilon$ from an optimal Pareto point in each dimension (using the $L_\infty$ norm). As a result, the size of the $\varepsilon$-approximate Pareto front is much smaller than that of the exact Pareto front. The authors prove that for any *n*-criteria optimization problem, any problem instance *x*, and any $\varepsilon$, there exists an $\varepsilon$-approximate Pareto front whose size is polynomial in the size of *x* and in $1/\varepsilon$, but 
+ +# 7 Conclusion + +We have presented a novel quad-criteria distributed scheduling heuristic called ERPOT (for Execution time, Reliability, Power consumption and Temperature), which minimizes the schedule length under three constraints: the power consumption, the maximal temperature, and the Global System Failure Rate (GSFR, which generalizes the classical failure rate per time unit of hardware elements to a whole schedule on a multicore architecture). These four criteria are all crucial to optimize embedded systems. By varying the three constraints and repeatedly invoking our ERPOT heuristic, we are able to compute the whole Pareto front in the 4D space (execution time, failure rate, power, temperature). + +Using ERPOT in practice involves (i) modelling the application as a DAG of tasks, (ii) evaluating the WCET of each task with a dedicated tool, (iii) gathering all the parameter values from the chip, (iv) building the Pareto front, and (v) choosing one solution from the Pareto front according to the application and user constraints. + +The failure rate constraint is met by adding active replica in the schedule. Hence the failure rate and the schedule length are antagonistic criteria. The power consumption constraint is met by using Dynamic Voltage and Frequency Scaling (DVFS). Hence the schedule length and the power consumption are antagonistic criteria. Finally, the temperature constraint is met by inserting cooling times in the schedule (but also by lowering the voltage). Hence the schedule length and the temperature are antagonistic criteria. + +The antagonisms between the criteria already make the scheduling problem very complex. Moreover, there are other interplays that must also be taken into account. For instance, lowering the voltage makes the hardware sensitive to lower energy particles, thereby increasing the nominal failure rates of the hardware components of the target architecture. 
ERPOT is the first scheduling heuristic able to take into account all those antagonisms. + +Extensive experimental results show that our scheduling heuristic works very well: (i) on small application graphs, ERPOT is outperformed on average by less than 10% by an ILP program that produces the *optimal* Pareto fronts; (ii) on large application graphs, both synthetic and real-life, ERPOT outperforms the PowerPerf-PET scheduling heuristic on average by 33%. The largest deviations between ERPOT and ILP occur when the $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$ constraints are more stringent. The reason is that the ILP program makes better choices between inserting cooling times and lowering the frequency/voltage. This hints at potential avenues for future improvements of ERPOT. + +It is tempting to extend our method to a general N-constraint method. However, in the context of real-time embedded systems, we believe that it is not possible. First, the constraints are inter-dependent, as evidenced by Eq. (6). The only possibility to get a general N-constraint method would be if the constraints were independent of each other. Second, incorporating the power consumption into our method required inserting virtual tasks in the schedule to fill holes, in order to avoid under-estimating the power consumption (see Sec. 4.3). Third, a similar issue emerged when incorporating the temperature, which required us to also insert virtual tasks to avoid under-estimating the peak temperature (see Sec. 4.5). Although it may seem that the solution is identical for the power consumption and temperature, this is not the case. As a matter of fact, keeping the system under a temperature threshold also requires inserting cooling times in the schedule. 
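The Pareto-dominance test underlying all the fronts discussed in this report can be sketched in a few lines. This is a generic illustration, not the paper's implementation; each schedule is summarized as a tuple (execution time, failure rate, power, temperature), all four to be minimized:

```python
def dominates(a, b):
    """True if point a dominates point b: a is no worse than b in every
    criterion and strictly better in at least one (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(points):
    """Keep only the non-dominated points of the 4D space
    (execution time, failure rate, power, temperature)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```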
\ No newline at end of file diff --git a/samples/texts/4011427/page_32.md b/samples/texts/4011427/page_32.md new file mode 100644 index 0000000000000000000000000000000000000000..d9b830b3c1b96517d5ec4a9e01bbec4a0d155e08 --- /dev/null +++ b/samples/texts/4011427/page_32.md @@ -0,0 +1,35 @@ +# References + +[1] L. Torres, P. Benoit, G. Sassatelli, M. Robert, F. Clermidy, and D. Puschini, "An introduction to multi-core system on chip – trends and challenges," in *Multiprocessor System-on-Chip*, pp. 1–21, Springer Science Business Media, Nov. 2010. + +[2] I. Assayad, A. Girault, and H. Kalla, "Tradeoff exploration between reliability, power consumption, and execution time for embedded systems," *International Journal on Software Tools for Technology Transfer*, vol. 15, no. 3, pp. 229–245, 2013. + +[3] Joint Electron Device Engineering Council, "Failure mechanisms and models for semiconductor devices," Tech. Report JEP 122-H, JEDEC, Aug. 2016. + +[4] H. Sheikh and I. Ahmad, "Sixteen heuristics for joint optimization of performance, energy, and temperature in allocating tasks to multi-cores," *ACM Trans. on Parallel Computing*, vol. 3, pp. 1–29, Aug. 2016. + +[5] A. Girault and H. Kalla, "A novel bicriteria scheduling heuristics providing a guaranteed global system failure rate," *IEEE Trans. on Dependable and Secure Computing*, vol. 6, pp. 241–254, Oct. 2009. + +[6] R. Viswanath, V. Wakharkar, A. Watwe, and V. Lebonheur, "Thermal performance challenges from silicon to systems," *Intel Technology Journal*, vol. Q3, 2000. + +[7] A. Das, A. Kumar, B. Veeravalli, C. Bolchini, and A. Miele, "Combined DVFS and mapping exploration for lifetime and soft-error susceptibility improvement in MPSoCs," in *Design, Automation & Test in Europe, DATE'14*, (Dresden, Germany), Mar. 2014. + +[8] P. Kumar and L. Thiele, "Thermally optimal stop-go scheduling of task graphs with real-time constraints," in *Asia and South Pacific Design Automation Conference, ASP-DAC'11*, IEEE, Jan. 2011. 
+ +[9] G. Xie, Y. Chen, and X. Xiao, "Energy-efficient fault-tolerant scheduling of reliable parallel applications on heterogeneous distributed embedded systems," *IEEE Trans. on Sustainable Computing*, June 2017. + +[10] D. Zhu, R. G. Melhem, and D. Mossé, "The effects of energy management on reliability in real-time embedded systems," in *ICCAD'04*, pp. 35–40, IEEE / ACM, 2004. + +[11] M. Andjelkovic, M. Krstic, R. Kraemer, V. Veeravalli, and A. Steininger, "A critical charge model for estimating the SET and SEU sensitivity: A Muller C-element case study," in *Asian Test Symposium, ATS'17*, pp. 82–87, IEEE, Nov. 2017. + +[12] Y. Haimes, L. Lasdon, and D. Wismer, "On a bicriterion formulation of the problems of integrated system identification and system optimization," *IEEE Trans. Systems, Man, and Cybernetics*, vol. 1, pp. 296–297, 1971. + +[13] M. Laumanns, L. Thiele, and E. Zitzler, "An efficient, adaptive parameter variation scheme for meta-heuristics based on the epsilon-constraint method," *European J. of Operational Research*, vol. 169, no. 3, pp. 932–942, 2006. + +[14] Y. Xie and W.-L. Hung, "Temperature-aware task allocation and scheduling for embedded multiprocessor systems-on-chip (MPSoC) design," *The Journal of VLSI Signal Processing*, vol. 45, pp. 177–189, Dec. 2006. + +[15] M. Garey and D. Johnson, *Computers and Intractability, a Guide to the Theory of NP-Completeness*. San Francisco: W.H. Freeman Company, 1979. + +[16] V. T'Kindt and J. Billaut, *Multicriteria Scheduling – Theory, Models and Algorithms*. Springer, 2006. + +[17] I. Das and J. Dennis, "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems," *SIAM J. Opt.*, vol. 8, pp. 631–657, Mar. 1998. 
\ No newline at end of file diff --git a/samples/texts/4011427/page_33.md b/samples/texts/4011427/page_33.md new file mode 100644 index 0000000000000000000000000000000000000000..1a4bb5e2b39a38715dc21f0724a5f95ed75e2d06 --- /dev/null +++ b/samples/texts/4011427/page_33.md @@ -0,0 +1,39 @@ +[18] P. Cousot and R. Cousot, "Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints," in *Principles of Programming Languages, POPL '77*, (Los Angeles (CA), USA), ACM SIGPLAN, Jan. 1977. + +[19] A. Colin and I. Puaut, "Worst case execution time analysis for a processor with branch prediction," *Real-Time Systems*, vol. 18, no. 2/3, pp. 249-274, 2000. + +[20] H. Theiling, C. Ferdinand, and R. Wilhelm, "Fast and precise WCET prediction by separate cache and path analyses," *Real-Time Systems*, vol. 18, pp. 157-179, May 2000. + +[21] S. Altmeyer, R. Davis, L. Indrusiak, C. Maiza, V. Nélis, and J. Reineke, "A generic and compositional framework for multicore response time analysis," in *International Conference on Real Time Networks and Systems, RTNS'15*, (Lille, France), pp. 129-138, ACM, Nov. 2015. + +[22] H. Rihani, M. Moy, C. Maiza, R. Davis, and S. Altmeyer, "Response time analysis of synchronous data flow programs on a many-core processor," in *International Conference on Real-Time Networks and Systems, RTNS'16*, (Brest, France), pp. 67-76, ACM, Oct. 2016. + +[23] R. Davis, S. Altmeyer, L. Indrusiak, C. Maiza, V. Nelis, and J. Reineke, "An extensible framework for multicore response time analysis," *Real-Time Syst.*, vol. 54, no. 3, pp. 607-661, 2018. + +[24] S. Shatz and J.-P. Wang, "Models and algorithms for reliability-oriented task-allocation in redundant distributed-computer systems," *IEEE Trans. Reliability*, vol. 38, pp. 16-26, Apr. 1989. + +[25] A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr, "Basic concepts and taxonomy of dependable and secure computing," *IEEE Trans. 
on Dependable and Secure Computing*, vol. 1, pp. 11-33, Jan. 2004. + +[26] H. Balaban, "Some effects of redundancy on system reliability," in *National Symposium on Reliability and Quality Control*, (Washington (DC), USA), pp. 385-402, Jan. 1960. + +[27] D. Rossi, M. Omana, C. Metra, and A. Paccagnella, "Impact of bias temperature instability on soft error susceptibility," *IEEE Trans. Very Large Scale Integration Systems*, vol. 23, pp. 743-751, Apr. 2015. + +[28] A. M. Fard, M. Ghasemi, and M. Kargahi, "Response-time minimization in soft real-time systems with temperature-affected reliability constraint," in *2015 CSI Symposium on Real-Time and Embedded Systems and Technologies, RTEST'15*, IEEE, Oct. 2015. + +[29] S. Hsueh, R. Huang, and C. Wen, "TASSER: A temperature-aware statistical soft-error-rate analysis framework for combinational circuits," in *Fifteenth International Symposium on Quality Electronic Design*, IEEE, Mar. 2014. + +[30] J. Srinivasan, S. V. Adve, P. Bose, and J. A. Rivers, "Exploiting structural duplication for lifetime reliability enhancement," in *ISCA*, pp. 520-531, IEEE, 2005. + +[31] M. Moy, C. Helmstetter, T. Bouhadiba, and F. Maraninchi, "Modeling power consumption and temperature in TLM models," *Leibniz T. on Embedded Systems*, vol. 3, no. 1, pp. 1-29, 2016. + +[32] F. Kreith, *CRC Handbook of Thermal Engineering*. Mechanical and Aerospace Engineering Series, CRC Press, 1999. + +[33] T. Chantem, R. P. Dick, and X. S. Hu, "Temperature-aware scheduling and assignment for hard real-time applications on MPSoCs," *IEEE Trans. Very Large Scale Integration Systems*, vol. 19, no. 10, pp. 1884-1897, 2011. + +[34] J. Knight and N. Leveson, "An experimental evaluation of the assumption of independence in multi-version programming," *IEEE Trans. Software Engin.*, vol. 12, no. 1, pp. 96-109, 1986. + +[35] P. Jensen and M. Bellmore, "An algorithm to determine the reliability of a complex system," *IEEE Trans. Reliability*, vol. 18, pp. 169-174, Nov. 
1969. + +[36] A. S. Hartman, D. E. Thomas, and B. H. Meyer, "A case for lifetime-aware task mapping in embedded chip multiprocessors," in *Proceedings of the eighth IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, CODES/ISSS'10*, ACM, 2010. \ No newline at end of file diff --git a/samples/texts/4011427/page_34.md b/samples/texts/4011427/page_34.md new file mode 100644 index 0000000000000000000000000000000000000000..f908647872c262241bee515f4e39c026d2cab95e --- /dev/null +++ b/samples/texts/4011427/page_34.md @@ -0,0 +1,15 @@ +# ERPOT: A quad-criteria scheduling heuristic to optimize the execution time, failure rate, power consumption and temperature in multicores + +ABDI Athena*, GIRAULT Alain†, ZARANDI Hamid‡ + +Project-Teams Spades + +Research Report n° 9196-v2 — March 2019 — 37 pages + +Version 2. + +* Department of Computer Engineering and Information Technology, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran. + +† Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LIG, 38000 Grenoble, France. + +‡ Department of Computer Engineering and Information Technology, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran. \ No newline at end of file diff --git a/samples/texts/4011427/page_35.md b/samples/texts/4011427/page_35.md new file mode 100644 index 0000000000000000000000000000000000000000..566bc00fb514b908f9abe4366393f702236ac694 --- /dev/null +++ b/samples/texts/4011427/page_35.md @@ -0,0 +1,17 @@ +[37] Z. Wang, S. Ranka, and P. Mishra, "Efficient task partitioning and scheduling for thermal management in multicore processors," in *ISQED'15*, IEEE, 2015. + +[38] Y.-K. Kwok and I. Ahmad, "Static scheduling algorithms for allocating directed task graphs to multiprocessors," *ACM Computing Surveys*, vol. 31, no. 4, pp. 406-471, 1999. + +[39] B. McCarl and T. Spreen, *Applied Mathematical Programming Using Algebraic Systems*. College Station (TX), USA: Texas A&M University, 2007. 
+ +[40] "Task graphs for free." http://ziyang.eecs.umich.edu/~dickrp/tgff. Accessed: 2016-10-22. + +[41] "Embedded microprocessor benchmark consortium." http://www.eembc.org. + +[42] IBM, "ILOG CPLEX optimizer." https://www-01.ibm.com/software/commerce/optimization/cplex-optimizer. Accessed: 2016-10-22. + +[43] L. Huang, F. Yuan, and Q. Xu, "Lifetime reliability-aware task allocation and scheduling for MPSoC platforms," in *Design Automation and Test in Europe Conference, DATE'09*, (Nice, France), pp. 51-56, Mar. 2009. + +[44] X. Qin, W. Wang, and P. Mishra, "TCEC: Temperature and energy-constrained scheduling in real-time multitasking systems," *IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems*, vol. 31, pp. 1159-1168, Aug. 2012. + +[45] Y. Ma, T. Chantem, and R. P. Dick, "Improving system-level lifetime reliability of multicore soft real-time systems," *IEEE Trans. Very Large Scale Integration Systems*, vol. 25, pp. 1895-1905, Mar. 2017. \ No newline at end of file diff --git a/samples/texts/4011427/page_36.md b/samples/texts/4011427/page_36.md new file mode 100644 index 0000000000000000000000000000000000000000..0be97533874a0a08a652c031f79b2d7bf9ed0465 --- /dev/null +++ b/samples/texts/4011427/page_36.md @@ -0,0 +1,16 @@ +# RESEARCH CENTRE +GRENOBLE - RHÔNE-ALPES + +Inovallée +655 avenue de l'Europe Montbonnot +38334 Saint Ismier Cedex + +Publisher +Inria +Domaine de Voluceau - Rocquencourt +BP 105 - 78153 Le Chesnay Cedex +inria.fr + +ISSN 0249-6399 \ No newline at end of file diff --git a/samples/texts/4011427/page_37.md b/samples/texts/4011427/page_37.md new file mode 100644 index 0000000000000000000000000000000000000000..955891be5fc6bb40dd02abfc9871a05753ccddcb --- /dev/null +++ b/samples/texts/4011427/page_37.md @@ -0,0 +1,3 @@ +**Abstract:** We investigate multi-criteria optimization and Pareto front generation. 
Given an application modeled as a Directed Acyclic Graph (DAG) of tasks and a multicore architecture, we produce a set of non-dominated (in the Pareto sense) static schedules of this DAG onto this multicore. The criteria we address are the execution time, reliability, power consumption, and peak temperature. These criteria exhibit complex antagonistic relations, which make the problem challenging. For instance, improving the reliability requires adding some redundancy in the schedule, which penalizes the execution time. To produce Pareto fronts in this 4-dimensional space, we transform three of the four criteria into constraints (the reliability, the power consumption, and the peak temperature), and we minimize the fourth one (the execution time of the schedule) under these three constraints. By varying the thresholds used for the three constraints, we are able to produce a Pareto front of non-dominated solutions. We propose two algorithms to compute static schedules. The first is a ready list scheduling heuristic called ERPOT (Execution time, Reliability, Power consumption and Temperature). ERPOT actively replicates the tasks to increase the reliability, uses Dynamic Voltage and Frequency Scaling to decrease the power consumption, and inserts cooling times to control the peak temperature. The second algorithm uses an Integer Linear Programming (ILP) program to compute an optimal schedule. However, because our multi-criteria scheduling problem is NP-complete, the ILP algorithm is limited to very small problem instances. Comparisons show that the schedules produced by ERPOT are on average only 10% worse than the optimal schedules computed by the ILP program, and that ERPOT outperforms the PowerPerf-PET heuristic from the literature on average by 33%.
\ No newline at end of file diff --git a/samples/texts/4011427/page_38.md b/samples/texts/4011427/page_38.md new file mode 100644 index 0000000000000000000000000000000000000000..1f1886b11068bd8305c60aa147c5727560c61c51 --- /dev/null +++ b/samples/texts/4011427/page_38.md @@ -0,0 +1,5 @@ +# ERPOT: Une heuristique d'ordonnancement quadri-critère pour optimiser le temps d'exécution, le taux de défaillance, la puissance électrique et la température sur les multi-cœurs + +**Résumé** : Nous nous attaquons à l'optimisation multi-critères et à la génération de fronts de Pareto. Étant données une application modélisée sous la forme d'un graphe orienté sans cycle (DAG) de tâches et une architecture multi-cœurs, nous calculons un ensemble d'ordonnancements statiques non dominés (au sens de Pareto) de ce DAG sur ce multi-cœurs. Les critères que nous considérons sont le temps d'exécution, la fiabilité, la puissance électrique et la température de crête. Ces critères présentent des relations complexes d'antagonisme, ce qui fait de notre problème d'ordonnancement un vrai défi. Par exemple, améliorer la fiabilité requiert d'ajouter de la redondance dans l'ordonnancement, ce qui pénalise le temps d'exécution. Afin de produire des fronts de Pareto dans cet espace à quatre dimensions, nous transformons trois de ces quatre critères en contraintes (la fiabilité, la puissance électrique et la température de crête) et nous minimisons le quatrième (le temps d'exécution) sous ces trois contraintes. En faisant varier les seuils utilisés pour les trois contraintes, nous sommes capables de produire un front de Pareto de solutions non-dominées. Nous proposons deux algorithmes pour calculer des ordonnancements statiques. Le premier est une heuristique de liste appelée ERPOT (Execution time, failure Rate, Power consumption and Temperature). 
ERPOT réplique activement les tâches pour améliorer la fiabilité, utilise l'Ajustement Dynamique de la Fréquence et de la Tension (ADFT) pour réduire la puissance électrique, et insère des intervalles d'inactivité pour contrôler la température de crête. Le second algorithme repose sur un Programme Linéaire en Nombres Entiers (PLNE) pour construire un ordonnancement optimal. Toutefois, dans la mesure où notre problème d'ordonnancement multi-critères est NP-complet, l'algorithme PLNE est limité à des instances de très petite taille. Les comparaisons montrent que les ordonnancements produits par ERPOT sont en moyenne 10% moins bons que les ordonnancements optimaux calculés par l'algorithme PLNE, et qu'ERPOT améliore en moyenne de 33% les ordonnancements produits par l'heuristique PowerPerf-PET de la littérature. + +**Mots-clés** : Ordonnancement statique, multi-cœurs, fiabilité, taux de défaillance, température, puissance électrique, optimisation multi-critères, front de Pareto. \ No newline at end of file diff --git a/samples/texts/4011427/page_39.md b/samples/texts/4011427/page_39.md new file mode 100644 index 0000000000000000000000000000000000000000..38b307a37d68796d36ac028b2a61a30dfd455a9d --- /dev/null +++ b/samples/texts/4011427/page_39.md @@ -0,0 +1,9 @@ +# Contents + +
| Section | Page |
|---|---|
| 1 Introduction | 4 |
| 2 Pareto optimization | 6 |
| 3 System model | 9 |
| 3.1 Application and architecture models | 9 |
| 3.2 Static mapping and scheduling | 10 |
| 3.3 Reliability | 10 |
| 3.4 Power consumption | 13 |
| 3.5 Temperature | 14 |
| 4 ERPOT: The Proposed Quad-Criteria Optimization Scheduling Heuristic Method | 16 |
| 4.1 General principles of ERPOT | 16 |
| 4.2 Quad-criteria scheduling heuristic algorithm | 18 |
| 4.3 Soundness of our scheduling heuristic | 19 |
| 4.4 Dealing with reactive systems | 20 |
| 4.5 Taking into account the temperature of the adjacent cores | 22 |
| 4.6 Integer Linear Program | 23 |
| 5 Simulation results | 26 |
| 5.1 Influence of the constraints on the schedules | 26 |
| 5.2 Pareto fronts obtained with ERPOT | 27 |
| 5.3 Comparison with PowerPerf-PET | 30 |
| 5.4 Evaluation of the ILP model | 31 |
| 6 Related work | 32 |
| 7 Conclusion | 34 |
+ +# 1 Introduction + +Multicores are widely used in modern safety critical embedded systems design. Their advantages over super-scalar processor architectures are lower power consumption, higher performance, and lower design complexity [1]. When designing safety critical applications, many non-functional criteria must be addressed. The most important ones are the *total execution time* (because these systems must react to inputs within a fixed delay), the *reliability* (because failures could have fatal consequences), the *power consumption* (to maximize the autonomy of the system when it operates on a battery), and the *temperature* (because of its negative influence on processing speed, reliability, and power consumption) [1, 2, 3, 4]. There are many real-life applications that motivate our study, including satellite systems, portable medical devices, and full authority digital engine control (FADEC) in aircraft. + +Considering these four criteria simultaneously during the design phase is very difficult because they are *antagonistic* [5, 1, 6, 2, 7, 4, 8, 9]. For instance, the total execution time and reliability are antagonistic because increasing the reliability requires some form of redundancy (be it spatial or temporal), which negatively impacts the execution time. Similarly, the execution time and the temperature are antagonistic because adding idle times to cool the cores obviously has a negative impact on the execution time. Finally, the execution time and the power consumption are antagonistic because reducing the power consumption requires lowering the operating voltage \ No newline at end of file diff --git a/samples/texts/4011427/page_4.md b/samples/texts/4011427/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..36ca5ea57dccdb4c3e55ed2c8f732b2db4f4988d --- /dev/null +++ b/samples/texts/4011427/page_4.md @@ -0,0 +1,30 @@ +**Algorithm 1** Grid method algorithm for 2 criteria. 
+ +**input:** The range [$K_1^{min}$, $K_1^{max}$] and the decrement $\Delta$ + +**output:** The list of Pareto points *Res* + +1: **function** GRID($K_1^{min}$, $K_1^{max}$, $\Delta$) +2: *Res* ← ∅; $K_1^1$ ← $K_1^{max}$; *i* ← 1 +3: **while** $K_1^i \ge K_1^{min}$ **do** +4: *Res* ← *Res* ∪ OPT($K_1^i$) +5: $K_1^{i+1}$ ← $K_1^i$ − $\Delta$; *i* ← *i* + 1 +6: **end while** +7: **return** REMOVENONDOMINATEDPOINTS(*Res*) +8: **end function** + +between $Z_1$ and $Z_2$, because no Pareto optimum has been found that dominates $x_4$. + +# 3 System model + +## 3.1 Application and architecture models + +An application is modeled as a directed acyclic graph (DAG) $Alg = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. Each node represents a computing task, and each edge represents a data-dependency between two tasks. All tasks are assumed to be side-effect free (this assumption is required for active replication). If $X \rightarrow Y$ is a data-dependency, then X is a predecessor of Y and Y is a successor of X. X is called the source of the data-dependency and Y is called its destination. We also define the sets $pred(X) = \{Y | (Y, X) \in \mathcal{E}\}$ and $succ(X) = \{Y | (X, Y) \in \mathcal{E}\}$. Tasks with no predecessor are called input tasks, and those with no successor are called output tasks. + +Fig. 3(a) shows an example of a DAG with two input tasks ($I_1$ and $I_2$), one output task ($O_1$), and four regular tasks ($A$, $B$, $C$ and $D$). + +Figure 3: (a) A sample application graph. (b) A sample architecture graph. (c) The corresponding coarse-grain floorplan. + +An *architecture* is a possibly heterogeneous multicore chip with one or more communication buses. It is modeled as a graph *Arc* = (*C*, *B*, *L*), where *C* is the set of cores, *B* is the set of communication buses, and each *e* ∈ *L* is a pair (*c*,*b*) ∈ *C* × *B* specifying that the core *c* is connected to the bus *b*. 
We assume that there exists a path between any two cores *c* and *c'*. An example of a target architecture made of four cores and one bus is shown in Fig. 3(b).

We are also given a function $\mathcal{E}xe_{nom}$ that returns the nominal (corresponding to the highest frequency) worst case execution times (WCETs) of all the tasks of *Alg* onto all the cores of *Arc*, \ No newline at end of file diff --git a/samples/texts/4011427/page_40.md b/samples/texts/4011427/page_40.md new file mode 100644 index 0000000000000000000000000000000000000000..7e3e37523ff36e69ef5c1eb02cd37d12f7c1f5c0 --- /dev/null +++ b/samples/texts/4011427/page_40.md @@ -0,0 +1,17 @@ +and frequency of the cores, which increases the execution time. Those tradeoffs are easy to grasp (but difficult to address), but other tradeoffs are less obvious: for instance, lowering the operating voltage and frequency of a core (which lowers the power consumption) increases the nominal failure rate per time unit of this core. The reason is that the sensitivity of processors to energetic particles leads to an increase of the failure rate at low voltage/frequency operating points [10, 11], because lowering the voltage decreases the critical charge of the circuit. As a consequence, the power consumption and the reliability are also antagonistic. Failing to take these antagonisms into account could result in bad design choices.

These antagonisms call for the computation of as many tradeoffs as possible, rather than a single tradeoff, so that the user will have a choice. We must therefore produce a *set* of solutions in the 4-dimensional space (execution time, reliability, power consumption, temperature). We rely on the notion of *Pareto dominance*, and we use a variant of the $\varepsilon$-constraint method [12, 13] coupled with a scheduling algorithm that accounts for the four criteria to produce the *Pareto front* in this 4D space.
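Such a threshold sweep can be sketched as follows. This is a hedged illustration, not the paper's implementation: `opt` stands in for the constrained scheduler, the threshold grids are arbitrary, and every criterion is assumed to be expressed so that smaller is better (e.g., a failure measure instead of the reliability):

```python
from itertools import product

def sweep(opt, grids):
    """Epsilon-constraint-style sweep: call the constrained optimizer for every
    combination of thresholds, then keep only the non-dominated points
    (all criteria expressed so that smaller is better)."""
    points = []
    for thresholds in product(*grids):
        sol = opt(*thresholds)  # -> tuple of criteria values, or None if infeasible
        if sol is not None:
            points.append(sol)

    def dominated(p):
        # q dominates p when q is componentwise <= p and q differs from p.
        return any(all(a <= b for a, b in zip(q, p)) and q != p for q in points)

    return [p for p in points if not dominated(p)]

# Toy optimizer: the execution time ignores the thresholds entirely, so the
# point obtained with the tightest thresholds dominates all the others.
front = sweep(lambda l, p, t: (5.0, l, p, t), [[1, 2], [1, 2], [1]])
assert front == [(5.0, 1, 1, 1)]
```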
More precisely, we transform three criteria into *constraints* (the reliability, the power consumption, and the peak temperature), and we *minimize* the fourth one (the execution time of the schedule) under these three constraints.

Although several studies have addressed some of these parameters, none have considered these four criteria jointly in an optimization problem. For instance, some studies completely ignore the reliability [14, 4] or the temperature [2, 9]. Other studies tackle the problem as a hardware/software co-design problem, jointly optimizing the floorplan of the multicore and the schedule of the application task graph to minimize the peak temperature [14], but without considering the reliability.

We therefore propose a static scheduling heuristic called ERPOT, an acronym that stands for Execution time, Reliability, Power consumption and Temperature. Given an application modeled as a Directed Acyclic Graph (DAG) of tasks, a multicore architecture, and thresholds on the reliability, the power consumption, and the temperature, ERPOT generates a static schedule of this DAG onto this multicore such that each of the three constrained criteria meets its corresponding threshold, and such that the execution time is as small as possible. Each schedule is interpreted as a point in the 4D space (execution time, reliability, power consumption, temperature). By varying the values of the thresholds and iteratively calling ERPOT, we are able to produce a full Pareto front in this 4D space.

The problem of scheduling a DAG of tasks onto a distributed architecture is known to be NP-complete [15], and so is the multi-criteria scheduling problem, which motivates the design of a heuristic algorithm. Additionally, we present an ILP program of the optimization problem, which is used to validate ERPOT (i.e., both algorithms produce the same schedule on the same problem instance) and to assess experimentally how good ERPOT is.
Comparing the results of ERPOT with the optimal results obtained by the ILP program shows that the average difference is less than 10%. However, ERPOT is much faster than the ILP program, which fails to complete for all but small application graphs (8 tasks at most).

The key contributions of this paper are:

* The ERPOT quad-criteria scheduling heuristic, which optimizes the *execution time*, the *reliability*, the *power consumption*, and the *temperature*.

* A 4D variant of the $\varepsilon$-constraint method [12] to build the Pareto front of the solutions in the 4D space (execution time, reliability, power, temperature).

* An ILP program of the quad-criteria optimization problem to compare the solution computed by ERPOT with the optimal solution. \ No newline at end of file diff --git a/samples/texts/4011427/page_41.md b/samples/texts/4011427/page_41.md new file mode 100644 index 0000000000000000000000000000000000000000..5746a35927ce5325304b5e16e93bd6743658c313 --- /dev/null +++ b/samples/texts/4011427/page_41.md @@ -0,0 +1,17 @@ +ERPOT extends the heuristics proposed in [2] by taking into account the peak temperature. The first challenge of doing so lies in the intricate dependence of the temperature on the other criteria of [2], namely the failure rate, the power consumption, and the execution time. The second challenge is in the scheduling heuristic itself: each scheduling decision is made by “predicting” the values of the temperature, the power consumption, and the failure rate at the end of the task being scheduled. However, the temperature varies during the execution of the task, because it obeys the classical thermal differential equation. Since the power consumption (and similarly the failure rate) depends on the temperature, the computation of the power consumption is inexact unless it is performed continuously during the execution of the task being scheduled, which is much too expensive.
Addressing this challenge requires an over-approximation of the temperature, together with a proof that this over-approximation is safe for the power-consumption constraint. This was not needed when only the power consumption and the failure rate were considered, which made the scheduling heuristic of [2] much simpler. The third challenge resides in maintaining the peak temperature below a given threshold, which involves a combination of lowering the voltage/frequency (thanks to DVFS), inserting cooling intervals, and over-estimating the temperature when there are “holes” at the end of the schedule under construction. A final contribution compared to [2] concerns the communications: the ILP program of [2] does not consider their cost, while the ILP program of Section 4.6 does, so the comparison performed in Section 5.4 is more relevant than the one presented in [2].

The rest of this paper is organized as follows. Section 2 recalls the basics of Pareto dominance and how to compute the Pareto front with the $\epsilon$-constraint method. Section 3 provides the required preliminaries, including the application and architecture models and the interplay between the reliability, the power consumption, the temperature, and the execution time. Section 4 presents the proposed scheduling heuristic ERPOT, along with its ILP counterpart. Section 5 presents the results of our simulations, performed both with synthetic benchmarks and with real-life benchmarks. Finally, Section 6 surveys the related work and Section 7 gives some concluding remarks.

## 2 Pareto optimization

Before detailing our problem formulation, solutions, and algorithms, we give foundational background on Pareto optimization. When optimizing more than one criterion, there can be several non-comparable solutions, e.g., (42, 13) versus (9, 78) in the case of two criteria that must be minimized.
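Non-comparability is easy to check mechanically. In the minimal sketch below (for minimized criteria, names ours), the componentwise test holds exactly when one point weakly or strongly dominates the other in the sense defined next:

```python
def dominates(p, q):
    """True when p is at least as good as q on every criterion and strictly
    better on at least one (all criteria are minimized)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# (42, 13) and (9, 78) are non-comparable: neither dominates the other.
assert not dominates((42, 13), (9, 78))
assert not dominates((9, 78), (42, 13))
# Improving one criterion without degrading the other yields dominance.
assert dominates((42, 12), (42, 13))
```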
The principle of Pareto optimization is to explore the design space by providing as many solutions as possible, to study the tradeoffs between these solutions. To compare solutions, we rely on the notion of dominance and Pareto optima, presented below in the case of two criteria that must be minimized (see Fig. 1(a)): + +* The point $(x, y)$ weakly dominates the point $(x', y')$ iff $(x < x' \land y = y') \lor (x = x' \land y < y')$. E.g., $x_2$ weakly dominates $x_1$. + +* The point $(x, y)$ strongly dominates the point $(x', y')$ iff $(x < x' \land y < y')$. E.g., $x_3$ strongly dominates $y_1$. + +* A point is a weak Pareto optimum iff there does not exist another point that strongly dominates it. E.g., $x_1, ..., x_5$ are weak Pareto optima. + +* A point is a strong Pareto optimum iff there does not exist another point that dominates it (weakly or strongly). E.g., $x_2, ..., x_5$ are strong Pareto optima. + +* The Pareto front is the set of all weak and strong Pareto optima. \ No newline at end of file diff --git a/samples/texts/4011427/page_5.md b/samples/texts/4011427/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..3f8c70ae083f32fe5e663dcb2a82ebd5ce2e1bc0 --- /dev/null +++ b/samples/texts/4011427/page_5.md @@ -0,0 +1,43 @@ +as well as the worst case communication times (WCCTs) of all the data-dependencies of *Alg* onto +all the communication buses of *Arc*. An intra-core communication takes no time to execute. For +the sake of simplicity, all execution times are assumed to be integer numbers. + +Computing the WCET of a given task on a processor has been the topic of much work. It +involves finding the sequence of instructions in the program of the task that leads to the longest +execution time. This is achieved by extracting the control flow graph (CFG) of the program, +then by giving a duration (i.e., a number of clock cycles) to each basic block of the CFG. 
These durations are computed based on a model of the micro-architecture of the processor. This step includes some pessimism because of the hardware abstraction, be it in the cache replacement policy, the pipeline, the branch predictor, or the prefetch buffer. Based on this, the WCET is the length of the longest weighted path in the annotated CFG. In general, the CFG contains backward edges, corresponding to the loops of the program. In this case, it is necessary to analyze the program in order to bound the number of iterations of each loop, which is classically done with abstract interpretation [18].

WCET analysis has been applied with success to real-life single-core processors actually used in embedded systems, with branch prediction [19] or with caches and pipelines [20]. These methods have later been adapted to multicores [21, 22, 23], taking into account the shared resources in the multicore (e.g., the shared memory or the bus).

Finally, the multicore is equipped with per-core DVFS. For each core, a set of (voltage, frequency) pairs $\{(V_i, f_i)\}_{1 \le i \le l}$ is given. For the sake of simplicity, we assume that all the cores have the same set of (voltage, frequency) pairs. The actual execution time of a task $\tau$ on a core $c$ depends on the frequency $f$ (in contrast, the buses are assumed to run at a fixed frequency denoted $f_b$). To ease the computations, we transform the frequencies into scaling factors. E.g., if the set of available frequencies is {900 MHz, 600 MHz, 300 MHz}, then we use the scaling factors {$f_{max} = f_3 = 1$, $f_2 = \frac{2}{3}$, $f_{min} = f_1 = \frac{1}{3}$}. As a result, the WCET of task $\tau$ at frequency $f$ is given by:

$$
\mathcal{E}xe(\tau, c, f) = \lceil \mathcal{E}xe_{nom}(\tau, c)/f \rceil \quad (1)
$$

where the $\lceil \cdot \rceil$ (ceiling) function guarantees that $\mathcal{E}xe$ always returns an integer number.
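Eq. (1) and the scaling-factor convention can be checked with exact rational arithmetic, reading the integer rounding as a ceiling (the conservative choice for a worst-case time). A sketch with names of our own:

```python
import math
from fractions import Fraction

def scaling_factor(f_mhz, f_max_mhz):
    """A frequency normalized by the maximum one, kept exact with rationals."""
    return Fraction(f_mhz, f_max_mhz)

def exe(wcet_nom, f):
    """Eq. (1): WCET at scaling factor f, rounded up to stay conservative."""
    return math.ceil(Fraction(wcet_nom) / f)

# 900/600/300 MHz become the scaling factors 1, 2/3 and 1/3.
assert scaling_factor(600, 900) == Fraction(2, 3)
assert scaling_factor(300, 900) == Fraction(1, 3)
# A nominal WCET of 100 stretches to 150 at 2/3 speed, 300 at 1/3 speed.
assert exe(100, Fraction(2, 3)) == 150
assert exe(100, Fraction(1, 3)) == 300
assert exe(100, Fraction(1)) == 100
```

Using `Fraction` avoids the floating-point pitfall where `100 / (2/3)` evaluates to slightly more than 150 and a naive ceiling would return 151.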
## 3.2 Static mapping and scheduling

The specification of the system consists of *Alg*, *Arc*, and $\mathcal{E}xe_{nom}$. First, we must choose one or several cores of *Arc* for each task of *Alg*, and one or several communication buses of *Arc* for each data-dependency: this is the *mapping*. During this phase, we take into account (i) the reliability constraint by choosing how many cores must execute each task, (ii) the power consumption constraint by choosing at what frequency/voltage each component (core or bus) should execute each task and data-dependency, and (iii) the temperature constraint by inserting cooling times whenever necessary. Second, we must compute the starting time for each pair (task,proc) and each pair (data dep.,bus): this is the *scheduling*. This paper solves these two steps *statically*, i.e., at compile time, based on a ready list scheduling heuristic. Finally, as said in the introduction, we schedule under constraints on the failure rate, the power consumption, and the temperature, denoted respectively $\Lambda_{obj}$, $P_{obj}$, and $T_{obj}$.

## 3.3 Reliability

Both the cores and the buses are assumed to be *fail-silent*. Classically, we adopt the failure model of Shatz and Wang [24]: failures are *transient*, and the maximal duration of a failure is \ No newline at end of file diff --git a/samples/texts/4011427/page_6.md b/samples/texts/4011427/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..22296cd9fc34f83b193431e04d73fa4816209bed --- /dev/null +++ b/samples/texts/4011427/page_6.md @@ -0,0 +1,28 @@ +such that it affects only the current task executing on the faulty core and not the subsequent tasks (same for the buses); this is known as the “hot” failure model.

Since the real-time systems we target are safety critical, the occurrence of failures is not acceptable and their reliability must be as close as possible to 1.
One of the main causes of system failure is transient faults [25], which are commonly modeled by a Poisson distribution with a constant rate denoted $\lambda$ [26]. Accordingly, the reliability of a single task or data-dependency $\tau$ mapped onto a hardware component $c$ (either a core or a bus) running at frequency $f$ is:

$$R(\tau, c, f) = e^{-\lambda_c \cdot \mathcal{E}xe(\tau, c, f)} \quad (2)$$

where $\lambda_c$ is the *failure rate per time unit* of the hardware component $c$, and $\mathcal{E}xe(\tau, c, f)$ is the execution time of $\tau$ on $c$ at frequency $f$, computed with Eq. (1). When $\tau$ is not replicated, we use Eq. (2). When $\tau$ is actively replicated on a set $K$ of $k$ hardware components numbered $\{c_i\}_{1 \le i \le k}$, each of them operating at frequency $f_{c_i}$, its reliability is:

$$R(\tau, K) = 1 - \left( \prod_{i=1}^{k} \left( 1 - e^{-\lambda_{c_i} \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})} \right) \right) \quad (3)$$

However, because of dynamic voltage and frequency scaling, $\lambda$ is not constant anymore but is instead a function of the operating frequency [10]:

$$\lambda_f = \lambda_0 \cdot \rho_f \quad \text{with} \quad \rho_f = 10^{\frac{b(1-f)}{1-f_{min}}} \qquad (4)$$

where $\lambda_0$ is the nominal failure rate per time unit, $\rho_f$ is the frequency-dependent factor, $b$ is a strictly positive constant that accounts for the susceptibility of the hardware to transient faults due to frequency scaling, $f$ is the operating frequency level, and $f_{min}$ is the lowest frequency of the system. Recall that the frequency value $f$ is normalized in the range (0, 1] with $f_{max} = 1$, consistently with Eq. (1).

Many articles have studied the impact of the temperature on the rate of transient faults [27, 28, 29].
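Before temperature enters the picture, the frequency-dependent model of Eqs. (2)–(4) composes mechanically. A minimal sketch with made-up parameter values ($\lambda_0$, $b$ and the execution times below are illustrative, not taken from the paper):

```python
import math

def rho_f(f, b, f_min):
    """Eq. (4): frequency-dependent factor 10^(b*(1-f)/(1-f_min))."""
    return 10 ** (b * (1 - f) / (1 - f_min))

def reliability(lam, exe):
    """Eq. (2): reliability of one execution with failure rate `lam` per time unit."""
    return math.exp(-lam * exe)

def reliability_replicated(replicas):
    """Eq. (3): probability that at least one replica (lam, exe) succeeds."""
    prod = 1.0
    for lam, exe in replicas:
        prod *= 1.0 - math.exp(-lam * exe)
    return 1.0 - prod

lam0, b, f_min = 1e-6, 2.0, 1 / 3
# At the lowest frequency, the failure rate is multiplied by 10^b.
assert rho_f(1.0, b, f_min) == 1.0
assert abs(rho_f(f_min, b, f_min) - 100.0) < 1e-9
# Active replication improves reliability: two copies beat one.
lam = lam0 * rho_f(2 / 3, b, f_min)
assert reliability_replicated([(lam, 100), (lam, 100)]) > reliability(lam, 100)
```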
In addition, there are several mechanisms that lead to permanent failures, most notably electromigration, negative bias temperature instability, stress migration, time-dependent dielectric breakdown, and thermal cycling [30, 3]. All of these phenomena can be characterized by a failure rate that is an exponential function of the temperature. We take into account the effect of the temperature on the failure rate per time unit with the Arrhenius equation [3]:

$$\lambda_T = \lambda_0 \cdot \rho_T \quad \text{with} \quad \rho_T = e^{\frac{-E_a}{K} \left( \frac{1}{T(t)} - \frac{1}{T_0} \right)} \qquad (5)$$

where again $\lambda_0$ is the nominal failure rate per time unit, $\rho_T$ is the temperature-related factor, $E_a$ is the activation energy, $K$ is Boltzmann's constant, $T(t)$ is the temperature of the system at time $t$ in Kelvin, and $T_0$ is the initial temperature. Of course, we will also have to take into account the effect of each core's temperature on the other cores (see Section 3.5).

Finally, we combine Eqs. (4) and (5) to provide a global equation of the failure rate per time unit as a function of the frequency and the temperature. Since the frequency factor $\rho_f$ and the temperature factor $\rho_T$ are both dimensionless, the dimension of $\lambda_{sys}$ is the same as that of $\lambda_0$, hence $\lambda_{sys}$ is also a failure rate per time unit:

$$\lambda_{sys} = \lambda_0 \cdot \rho_f \cdot \rho_T = \lambda_0 \cdot 10^{\frac{b(1-f)}{1-f_{min}}} \cdot e^{\frac{-E_a}{K} \left( \frac{1}{T(t)} - \frac{1}{T_0} \right)} \quad (6)$$

When computing the reliability of a given task or data-dependency $\tau$ on a single hardware component $c_i$ (resp. a set $K = \{c_i\}_{1 \le i \le k}$), we therefore use Eq. (2) (resp. Eq.
(3)) by replacing \ No newline at end of file diff --git a/samples/texts/4011427/page_7.md b/samples/texts/4011427/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..bd59b2b783e77ff350cb403d7d37bb5000e76c3f --- /dev/null +++ b/samples/texts/4011427/page_7.md @@ -0,0 +1,19 @@ +$\lambda_{c_i}$ by $\lambda_{sys}(c_i):$

$$R(\tau, c_i, f_{c_i}, t) = e^{-\lambda_{sys}(c_i) \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})} \qquad (7)$$

$$R(\tau, K, t) = 1 - \left( \prod_{i=1}^{k} \left(1 - e^{-\lambda_{sys}(c_i) \cdot \mathcal{E}xe(\tau, c_i, f_{c_i})}\right) \right) \qquad (8)$$

where $t$ appears in order to make explicit the time dependence of $\lambda_{sys}(c_i)$, through the temperature of $c_i$. Throughout the paper, we take the temperature at the *task granularity*, i.e., we assume that $T(t)$ remains constant for the entire duration of $\tau$. We will prove at the end of this section that doing so is safe for the $\Lambda_{obj}$ constraint.

It has been demonstrated in [5, 2] that using the reliability as a constraint in the $\epsilon$-constraint method does not work. Intuitively, this is because the reliability is not an invariant measure of the number of scheduled tasks. Indeed, computing the reliability of a schedule involves, at each mapping decision, a multiplication by a factor that is strictly less than 1: see Eq. (3). This is illustrated in Fig. 4(a), where the horizontal axis counts the task numbers in their mapping order (recall that we use a ready list scheduling algorithm). As long as the reliability is above the threshold $R_{obj}$, the tasks are not replicated, because this is what minimizes the schedule length; thus the replication level of tasks 1 to 4 is 1 (red dashed line). This results in a multiplicative factor significantly below 1, which causes the system's reliability to drop (blue solid line).
Once task 4 has been scheduled, the reliability is very close to $R_{obj}$; this causes the replication level to skyrocket up to a value sufficient for the multiplying factor to be close enough to 1, so that the system's reliability remains above $R_{obj}$. We call this the “funnel effect” [2]. + +Figure 4: Funnel effect: (a) when using the reliability, (b) when using the energy. + +For this reason, instead of the reliability, we use the Global System Failure Rate (GSFR) [5]. Intuitively, the GSFR of a possibly partial schedule is the failure rate of the system operating under this schedule as if it was a single task mapped on a single core. As a consequence, we schedule under a constraint $\Lambda_{obj}$ on the GSFR instead of a constraint $R_{obj}$ on the reliability. For a single task $\tau$, the GSFR is denoted $\Lambda(\tau)$ and is computed as: + +$$\Lambda(\tau, c, f, t) = \frac{-\log(R(\tau, c, f, t))}{\mathcal{E}xe(\tau, c, f)} \qquad (9)$$ + +And for a schedule $S$, the GSFR $\Lambda(S)$ is computed as: + +$$\Lambda(S) = \frac{-\log(R(S))}{U(S)} \quad \text{with} \quad U(S) = \sum_{(\tau,c,f) \in S} \mathcal{E}xe(\tau,c,f) \qquad (10)$$ \ No newline at end of file diff --git a/samples/texts/4011427/page_8.md b/samples/texts/4011427/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..3173f8fd20444069277829d3dcd3a1647dced54a --- /dev/null +++ b/samples/texts/4011427/page_8.md @@ -0,0 +1,35 @@ +where $R(S)$ is the reliability of the schedule $S$ and $U(S)$ is the overall sum of the execution times of the cores in $S$. The notation $(\tau, c, f) \in S$ means that, in the schedule $S$, task $\tau$ is executed on core $c$ at frequency $f$. Eq. (10) is equivalent to $R(S) = e^{-\Lambda(S) \cdot U(S)}$, which is the same as Eq. (2) but for a schedule $S$ instead of a single task $\tau$. + +One key aspect of Eq. (10) is that it uses $U(S)$ and not the schedule length. 
There are two reasons behind this choice: first, it makes the computation of the GSFR *compositional* with respect to the structure of the schedule, and second, it is consistent with the “hot” failure model [5].

The consequence of this shift from the reliability to the GSFR is that, from now on, our state space will be the 4D space (execution time, GSFR, power, temperature).

We are now ready to prove that assuming that the temperature on each core $c_j$ and on the bus $b$ remains constant during the execution of each task/data-dependency $\tau$ is safe w.r.t. the $\Lambda_{obj}$ constraint.

**Proposition 1** Let $\tau$ be a task or a data-dependency scheduled on a hardware component $c$ at frequency $f$, starting at time $t_0$ and finishing at time $t_f = t_0 + \mathcal{E}xe(\tau, c, f)$. The reliability of $\tau$ on $c$ is computed with Eq. (7) and the GSFR with Eq. (9). (i) If the temperature increases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_f)$ is safe regarding the $\Lambda_{obj}$ constraint. (ii) If the temperature decreases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_0)$ is safe regarding the $\Lambda_{obj}$ constraint.

**Proof:** (i) In the heating mode, the temperature increases during the execution of $\tau$, and when it does, $\lambda_{sys}(c)$ increases too. Since $R$ is a decreasing function of $\lambda_{sys}$, we have:

$$ \forall t \in [t_0, t_f], R(\tau, c, f, t) \geq R(\tau, c, f, t_f) $$

Since $R(\tau, c, f, t) \geq R(\tau, c, f, t_f) \iff \Lambda(\tau, c, f, t) \leq \Lambda(\tau, c, f, t_f)$, we therefore have:

$$ \Lambda(\tau, c, f, t_f) \leq \Lambda_{obj} \implies \forall t \in [t_0, t_f], \Lambda(\tau, c, f, t) \leq \Lambda_{obj} $$

which proves that assuming that $T(t)$ remains constant and equal to $T(t_f)$ is safe regarding the $\Lambda_{obj}$ constraint.
+ +(ii) In the cooling mode, the proof is identical since $T(t)$ decreases so $\lambda_{sys}(t)$ decreases, hence assuming that $T(t)$ remains constant and equal to $T(t_0)$ is safe regarding the $\Lambda_{obj}$ constraint. $\square$ + +## 3.4 Power consumption + +The power consumption of a single task (or data-dependency) running on a hardware component is composed of two aspects [10, 31]: (i) the leakage power and (ii) the dynamic power. The former depends on the leakage current, which itself mostly depends on the chip temperature, while the latter depends on the chosen pair (voltage $V$, frequency $f$). The overall power consumption $P_{sys}$ is equal to $P_{leak} + P_{dyn}$, computed by Eq. (11): + +$$ +\begin{cases} +P_{sys}(t) = \alpha \cdot T(t) + \beta_h + \gamma \cdot C_{ef} \cdot V^2 \cdot f & \text{if heating} \\ +P_{sys}(t) = \alpha \cdot T(t) + \beta_c + \gamma \cdot C_{ef} \cdot V^2 \cdot f & \text{if cooling} +\end{cases} +\quad (11) +$$ + +Regarding the leakage power, $\alpha$, $\beta_h$, and $\beta_c$ are architecture-dependent coefficients and are determined based on the characteristics of the platform; $\beta_h$ is used in the heating mode and $\beta_c$ in the cooling mode [8]. Finally, $T(t)$ is the chip temperature at time $t$, in Kelvin. Regarding the dynamic power, $V$ is the supply voltage, $f$ is the frequency, $C_{ef}$ is the switching capacitance (a constant that depends on the chip technology), and $\gamma$ is the activity ratio, which varies from \ No newline at end of file diff --git a/samples/texts/4011427/page_9.md b/samples/texts/4011427/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..9a07eed724d29c69b4501c71a4776b5da340e051 --- /dev/null +++ b/samples/texts/4011427/page_9.md @@ -0,0 +1,25 @@ +0 (no activity) to 1 (all gates are active at each cycle). In theory, there should be a different $\gamma$ for each task, and our scheduling algorithm can handle it. 
In practice, for the sake of simplicity we take an average $\gamma$ value, identical for all the tasks. + +Recall that we take the temperature at the *task granularity*, i.e., we assume that $T(t)$ remains constant for the entire duration of $\tau$. The following property states that doing this is safe regarding the $P_{obj}$ constraint. + +**Proposition 2** Let $\tau$ be a task or a data-dependency scheduled on a hardware component c at frequency f, starting at time $t_0$ and finishing at time $t_f = t_0 + \mathcal{E}xe(\tau, c, f)$. The power consumption of c during the execution of $\tau$ is computed with Eq. (11). (i) If the temperature increases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_f)$ is safe regarding the $P_{obj}$ constraint. (ii) If the temperature decreases over the interval $[t_0, t_f]$, then fixing $T(t) = T(t_0)$ is safe regarding the $P_{obj}$ constraint. + +**Proof:** (i) In the heating mode, the temperature increases during the execution of $\tau$, and when it does, $P_{sys}(t)$ increases too. It follows that assuming $T(t)$ to remain constant over the interval $[t_0, t_f]$ and equal to $T(t_f)$ yields $\forall t, P_{sys}(t) \le P_{sys}(t_f)$. Therefore, we have: + +$$P_{sys}(t_f) \le P_{obj} \implies \forall t \in [t_0, t_f], P_{sys}(t) \le P_{obj}$$ + +which proves that assuming that $T(t)$ remains constant and equal to $T(t_f)$ is safe regarding the $P_{obj}$ constraint. + +(ii) In the cooling mode, the proof is identical since $T(t)$ decreases so $P_{sys}(t)$ decreases, hence assuming that $T(t)$ remains constant and equal to $T(t_0)$ is safe regarding the $P_{obj}$ constraint. $\square$ + +From Eq. (11), we can then compute the energy consumed by the system when executing a schedule (possibly partial). However, the same funnel effect as with the reliability occurs if one uses the energy as a constraint in the $\epsilon$-constraint method [2]. 
The reason again is that the energy is not an invariant measure of the number of scheduled tasks. Indeed, computing the energy consumed by a schedule involves, at each mapping decision, the addition of a term that is strictly positive. This is illustrated in Fig. 4(b): the horizontal axis counts the task numbers in their mapping order; the blue solid line depicts the cumulative energy consumed by the system; up to task 6, the energy is below the energy constraint $E_{obj}$, so everything is fine; however, there is no way to schedule task 7 without violating the energy constraint. For this reason, in our multi-criteria scheduling heuristic we use the power consumption, with a constraint $P_{obj}$, which is an invariant measure of the number of scheduled tasks.

## 3.5 Temperature

The instantaneous temperature of a computing system depends on the power consumption and on the current temperature (and its variations in time). For a given hardware component $c$ (core or bus), it is computed based on the following differential equation [32]:

$$C \cdot \left( \frac{dT_c(t)}{dt} \right) + G(T_c(t) - T_{amb}) = P(t) \quad (12)$$

where $C$ and $G$ are architecture-dependent constants (the thermal capacitance and the thermal conductance), and $T_c$, $t$, $T_{amb}$, and $P$ are respectively the temperature of $c$, the time, the ambient temperature (assumed to be less than $T_{obj}$¹), and the instantaneous power consumption of the system. The power consumption is the sum of the leakage and dynamic power, as given by Eq. (11).

¹If $T_{amb} > T_{obj}$, then putting the component in the idle mode does not allow it to cool down.
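Eq. (12), fed by the power model of Eq. (11), can be integrated numerically. The sketch below uses made-up constants, collapses the paper's $\beta_h$/$\beta_c$ distinction into a single $\beta$, and is only meant to show the heating/cooling behavior, not to reproduce the paper's thermal parameters:

```python
# Illustrative constants, not values from the paper: C_TH/G_TH are the thermal
# capacitance and conductance of Eq. (12); ALPHA/BETA/GAMMA/C_EF parameterize
# the power model of Eq. (11).
C_TH, G_TH = 0.03, 0.3
ALPHA, BETA = 0.001, 0.1
GAMMA, C_EF = 1.0, 1e-9
T_AMB = 300.0  # ambient temperature, in Kelvin

def p_sys(temp, v, f):
    """Eq. (11): leakage power (alpha*T + beta) plus dynamic power (gamma*Cef*V^2*f)."""
    return ALPHA * temp + BETA + GAMMA * C_EF * v ** 2 * f

def temperature(duration, t0=T_AMB, v=1.0, f=900e6, dt=1e-3):
    """Forward-Euler integration of Eq. (12): C*dT/dt + G*(T - T_amb) = P(t)."""
    temp = t0
    for _ in range(round(duration / dt)):
        temp += dt * (p_sys(temp, v, f) - G_TH * (temp - T_AMB)) / C_TH
    return temp

hot = temperature(2.0)  # running at full voltage/frequency heats the core
cooled = temperature(2.0, t0=hot, v=0.0, f=0.0)  # idling relaxes it toward ambient
assert hot > T_AMB
assert cooled < hot
```

With these constants the steady state lies a few Kelvin above ambient, and the time constant is $C/G$; an idle interval (the "cooling times" of §3.2) lets the component relax back toward its lower, leakage-only steady state.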
\ No newline at end of file diff --git a/samples/texts/5195943/page_1.md b/samples/texts/5195943/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..237428e958ee9564f465ad111b5f68aa6894092d --- /dev/null +++ b/samples/texts/5195943/page_1.md @@ -0,0 +1,28 @@ +# The competition between simple and complex evolutionary trajectories in asexual populations + +Ian E Ochs and Michael M Desai* + +**Abstract** + +**Background:** On rugged fitness landscapes where sign epistasis is common, adaptation can often involve either individually beneficial "uphill" mutations or more complex mutational trajectories involving fitness valleys or plateaus. The dynamics of the evolutionary process determine the probability that evolution will take any specific path among a variety of competing possible trajectories. Understanding this evolutionary choice is essential if we are to understand the outcomes and predictability of adaptation on rugged landscapes. + +**Results:** We present a simple model to analyze the probability that evolution will eschew immediately uphill paths in favor of crossing fitness valleys or plateaus that lead to higher fitness but less accessible genotypes. We calculate how this probability depends on the population size, mutation rates, and relevant selection pressures, and compare our analytical results to Wright-Fisher simulations. + +**Conclusion:** We find that the probability of valley crossing depends nonmonotonically on population size: intermediate size populations are most likely to follow a "greedy" strategy of acquiring immediately beneficial mutations even if they lead to evolutionary dead ends, while larger and smaller populations are more likely to cross fitness valleys to reach distant advantageous genotypes. We explicitly identify the boundaries between these different regimes in terms of the relevant evolutionary parameters. 
Above a certain threshold population size, we show that the probability that the population finds the more distant peak depends only on a single simple combination of the relevant parameters. + +**Keywords:** Epistasis, Rugged fitness landscape, Fitness valley + +**Background** + +In an adapting population, evolution often has the potential to follow many distinct mutational trajectories. In order to predict how the population will adapt, we must understand how evolution chooses among these possibilities. Many experimental and theoretical studies have analyzed this question, focusing primarily on the simple case where epistasis is absent, so that each mutation has some fixed fitness effect [1-6]. This work can explain the probability that a given mutation will fix as a population adapts, as a function of its fitness effect, the population size, mutation rate, distribution of fitness effects of + +other mutations, and other parameters of the evolutionary process. + +However, the fitness effect of a mutation often depends on the genetic background in which it occurs. A particularly interesting form of this phenomenon, *sign epistasis*, occurs when several mutations are individually neutral or deleterious but their combination is beneficial [7]. Sign epistasis has been observed repeatedly in experiments [8-13], and plays a central role in the evolution of complex phenotypes that involve multiple interacting components. When sign epistasis is present, adaptation can involve passing through genotypes of lower fitness — i.e. a population may have to cross a fitness valley or plateau. Thus the fate of a mutation depends not only on its fitness, but also on its adaptive potential [14]. + +Several recent theoretical studies have analyzed the evolutionary dynamics of fitness valley crossing [15-20]. 
This + +*Correspondence: mmdesai@fas.harvard.edu +Department of Organismic and Evolutionary Biology, Department of Physics, +and FAS Center for Systems Biology, Harvard University, 02138 Cambridge, MA, +USA \ No newline at end of file diff --git a/samples/texts/5195943/page_2.md b/samples/texts/5195943/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..cd9720e7068fb4543ee8cfd099edc388a7b2a1dc --- /dev/null +++ b/samples/texts/5195943/page_2.md @@ -0,0 +1,25 @@ +work has focused on calculating the rate at which adapting populations cross a valley or plateau, in the absence of any other possible mutational trajectories. However, individually beneficial mutations may often compete with more complex evolutionary trajectories. We must then ask how likely evolution is to eschew the immediately uphill paths, and instead cross valleys or plateaus to reach better but less accessible genotypes. In other words, when the fitness landscape is rugged, we wish to understand whether evolution will take the more “farsighted” path to reach distant advantageous genotypes, rather than a “greedy” trajectory that fixes immediately beneficial mutations regardless of whether these may lead to evolutionary dead ends. + +In this article, we analyze this evolutionary choice between immediately beneficial mutations and more complex mutational trajectories that ultimately lead to higher fitness. We calculate the probability that an adapting population will follow each type of competing trajectory, as a function of the population size, mutation rates, and selection pressures. We focus on asexual populations, where the only way for a population to acquire a complex adaptation is for a single lineage to acquire each mutation in turn. Our analysis is similar in spirit to earlier work which also considered the tradeoff between short-term and long-term fitness advantages [21-24]. 
However, these earlier studies dealt with competition between different strictly uphill or neutral paths, and considered the case where the less beneficial initial mutation led to better long-term evolutionary opportunities. In contrast, our analysis describes the competition between uphill mutations and more complex trajectories. While these two cases can be qualitatively similar in very small populations, they lead to very different dynamics in larger populations where the sign of the effect of the intermediate mutation can play a crucial role. + +Our results show that population size has a crucial impact on how “farsighted” evolution can be. This dependence is not monotonic: evolution at intermediate population sizes is most “greedy”, while both larger and smaller populations are more likely to eschew uphill paths in favor of complex trajectories. In large populations, our results show that a single parameter reliably predicts the extent of this evolutionary “foresight” across a wide range of parameters. Finally, we describe how our analysis can be generalized to predict how evolution will choose among even more complex trajectories, such as broad fitness valleys with multiple intermediate genotypes, and we discuss evolution in genotype spaces with many possible evolutionary paths. + +## Methods + +We are interested in how a population makes an evolutionary choice when confronted with multiple possible mutational trajectories. Specifically, we focus on + +the extent to which adaptation proceeds by crossing fitness valleys rather than acquiring immediately beneficial (uphill) mutations. Of course, the relative frequency of valley crossing will depend on the number of available fitness valleys, their depth, and the fitness advantage of the multiple-mutants, as well as the distribution of fitness effects (DFE) of the uphill mutants. Our goal is to understand how the prevalence of valley crossing depends on these factors. 
+ +### Model + +Throughout most of this article, we consider the simplest context in which we can address this question: the choice between a single uphill path and a single fitness valley. Specifically, we consider a haploid asexual population of constant size *N* which can either acquire an uphill mutation (*u*) that confers an immediate fitness advantage $s_u$, or alternatively acquire a deleterious fitness valley intermediate (*i*) with fitness deficit $\delta_i$ on which background a double-mutant (*v*) with fitness $s_v > s_u$ can arise. This scenario is illustrated in Figure 1. We also consider the case of a fitness plateau, where $\delta_i = 0$. + +Because we are interested in the evolutionary choice between competing mutational trajectories, we assume that these two trajectories are mutually exclusive (i.e. the mixed genotypes *ui* and *uv* are strongly deleterious), so that only one genotype (either *u* or *v*) can eventually fix in the population. As a measure of evolutionary foresight, we analyze the probability that the double-mutant *v* fixes as a function of the relevant mutation rates, selection coefficients, and population size. In some situations, we could imagine that after either genotype *u* or *v* fixes, another set of competing potential trajectories become available. In this case, our analysis predicts the long-term relative ratio of fixed uphill versus valley-crossing mutations. In the Discussion, we consider how this model can be extended to the situation where there are many different competing uphill paths and valleys, and to broader fitness valleys involving multiple intermediate genotypes. + +### Simulations + +In addition, we compare our analytical predictions for valley crossing probability to Wright-Fisher simulations. Each simulated population was evolved until either the uphill genotype or valley-crossing genotype fixed. 
Valley crossing probabilities were then inferred from the fraction of 1000 trials per parameter set in which the valley-crossing genotype fixed.

## Results

**Figure 1** Model and characteristic trajectories. (a) The model used to study the prevalence of fitness valley crossing. The population starts as wild type (*w*); it acquires uphill mutations (*u*) at rate $\mu_u$, which confer an immediate fitness advantage $s_u$, and deleterious fitness valley intermediates (*i*) at rate $\mu_i$, with fitness deficit $\delta_i$, on which background double-mutants (*v*) with fitness $s_v > s_u$ arise at rate $\mu_v$. (b)-(e) The four main forms of fitness valley crossing. (b) Small populations are characterized by low genetic diversity and strong genetic drift, so sequential fixation of intermediates dominates the dynamics. (c) In larger populations, genetic diversity is maintained longer, and double mutants tend to arise on transient single-mutant backgrounds, in a process known as stochastic tunneling. (d) If the drift time of the intermediate is short compared to the timescale on which the background fitness changes, we can approximate the drift time by its expectation, dramatically simplifying the mathematical analysis. (e) For very large populations, we can treat single-mutants deterministically, in a process dubbed semi-deterministic tunneling.

In the absence of the uphill genotype, fitness valley crossing can be modeled as a homogeneous Poisson process, with rates as calculated by [17]. In small populations, the primary role of the uphill genotype is to introduce an
effective *time limit* on this process: once an uphill mutation destined to survive drift first occurs, it very quickly fixes, leading to the extinction of the wild type. The probability of valley-crossing can thus be calculated as the probability that the intermediate *i* fixes before the uphill genotype *u*. An example of this is shown in Figure 1b.

In larger populations, the dynamics are more complex, as illustrated in Figure 1c. Rather than imposing a single cutoff time for valley-crossing, the uphill mutant arises and gradually increases in frequency. This leads to a decline in the size of the wild-type background on which intermediate and valley-crossing mutants can arise, and a corresponding increase in the mean fitness of the population (Figure 1c). These effects gradually reduce the rate at which intermediates are produced, and make these intermediates effectively more deleterious relative to the mean fitness, reducing the rate of the valley-crossing process. Valley-crossing thus becomes an *inhomogeneous* Poisson process, with a rate that depends on the random appearance time $T_u$ of the uphill mutant.

In general, these effects of interference and tunneling are complex. However, the analysis becomes simpler in two specific regimes. When the expected drift time of the intermediate genotype is short, we can neglect the changing background fitness due to the uphill mutant during this drift time (Figure 1d). Alternatively, for very large populations ($N\mu > 1$), the Poisson process approximation breaks down; both uphill and intermediate mutations can then be treated deterministically (Figure 1e), and only the valley-crossing genotype must be treated stochastically.

These various regimes are illustrated in Figure 2. We now analyze each in turn, assuming weak selection ($s_j \ll 1$) for all genotypes throughout.
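Before turning to each regime, the model and the Wright-Fisher simulations described in the Methods can be sketched in a few lines. All parameter values below are assumed for illustration only (they are not the values used in the figures), and the mutually exclusive trajectories are enforced by allowing only the *w*→*u*, *w*→*i*, and *i*→*v* mutations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not the values used in the figures)
N = 2000
mu_u, mu_i, mu_v = 1e-5, 1e-5, 1e-5
s_u, s_v, delta_i = 0.03, 0.07, 0.0      # delta_i = 0: a fitness plateau

# Genotype indices: 0 = wild type (w), 1 = uphill (u), 2 = intermediate (i), 3 = double mutant (v)
fitness = np.array([1.0, 1.0 + s_u, 1.0 - delta_i, 1.0 + s_v])

def crossed_once():
    counts = np.array([N, 0, 0, 0])
    while counts[1] < N and counts[3] < N:          # run until u or v fixes
        # mutation step: w -> u, w -> i, and i -> v (trajectories mutually exclusive)
        n_wu = rng.binomial(counts[0], mu_u)
        n_wi = rng.binomial(counts[0] - n_wu, mu_i)
        n_iv = rng.binomial(counts[2], mu_v)
        counts = counts + np.array([-(n_wu + n_wi), n_wu, n_wi - n_iv, n_iv])
        # Wright-Fisher resampling of N individuals, weighted by fitness
        w = counts * fitness
        counts = rng.multinomial(N, w / w.sum())
    return counts[3] == N                            # True if the valley was crossed

trials = 100
crossings = sum(crossed_once() for _ in range(trials))
print("P_cross ~", crossings / trials)
```

With these assumed parameters the uphill mutation usually wins, so valley crossings are rare; the fraction of crossing trials is the quantity the analysis below predicts.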
Taken together, this provides a complete picture of the probability that evolution will eschew the immediately uphill path in favor of the more complex adaptation.

**Figure 2 Regimes of valley crossing.** Phase plot summarizing the different regimes of fitness valley crossing. (a) The regime boundaries in terms of population size $N$ and uphill fitness $s_u$. (b) The boundaries in terms of population size $N$ and intermediate deleteriousness $\delta_i$.

### Small populations

When the population size is small enough that the probability of stochastic tunneling is very low, the population is generally clonal or nearly clonal, and moves in Markovian jumps between neighboring genotypes. The transition between genotypes *i* and *j* occurs at rate

$$
r_{ij} = N\mu_{ij}\pi_{ij}, \tag{1}
$$

where $\pi_{ij}$ is the probability that a single *j* mutant will give rise to a lineage that fixes, given by the standard formula

$$
\pi_{ij} = \frac{1 - e^{-2(s_j - s_i)}}{1 - e^{-2N(s_j - s_i)}}. \quad (2)
$$

We refer to this as the sequential fixation regime. Because we are considering neutral and weakly deleterious intermediates, we account for the possibility of back-mutation to the wild type if the intermediate fixes. The process can therefore be modeled as an absorbing Markov chain, in which the wild type and intermediate act as transient states, and the uphill and double-mutant genotypes act as absorbing states.
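As a numerical check on this formulation, the absorbing chain can be solved directly. The parameter values below are assumed for illustration (with a single mutation rate $\mu$ used for every edge, including back-mutation), and `pi_fix` implements the standard fixation probability of Equation (2):

```python
import numpy as np

def pi_fix(N, s_i, s_j):
    """Fixation probability of a single j mutant on an i background, Eq. (2)."""
    ds = s_j - s_i
    if ds == 0:
        return 1.0 / N
    return (1 - np.exp(-2 * ds)) / (1 - np.exp(-2 * N * ds))

# Assumed illustrative parameters
N, mu = 100, 1e-5
s = {"w": 0.0, "u": 0.03, "i": -0.01, "v": 0.07}
r = {(a, b): N * mu * pi_fix(N, s[a], s[b])
     for a, b in [("w", "u"), ("w", "i"), ("i", "w"), ("i", "v")]}

# Jump-chain absorption into v:  p_w = P(w->i) p_i,   p_i = P(i->v) + P(i->w) p_w
A = np.array([[1.0, -r["w", "i"] / (r["w", "i"] + r["w", "u"])],
              [-r["i", "w"] / (r["i", "w"] + r["i", "v"]), 1.0]])
b = np.array([0.0, r["i", "v"] / (r["i", "w"] + r["i", "v"])])
p_w, p_i = np.linalg.solve(A, b)
print("P_cross =", p_w)
```

Solving this small linear system for absorption into the double-mutant state reproduces the closed form derived next.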
From +elementary Markov chain theory, we find + +$$ +P_{\text{cross}} = \frac{r_{wi}r_{iv}}{r_{wi}r_{iv} + r_{wu}(r_{iw} + r_{iv})} = \left[ 1 + \left( \frac{\pi_{wu}}{\pi_{wi}} \right) \left( \frac{\mu_u}{\mu_i} \right) \left( 1 + \frac{\mu_i \pi_{iw}}{\mu_v \pi_{iv}} \right) \right]^{-1}. \quad (3) +$$ + +As the population size increases, $\pi_{wu} \to 2s_u$ and $\pi_{wi} \to 0$, so $\frac{\pi_{wu}}{\pi_{wi}} \to \infty$, and $P_{\text{cross}} \to 0$. Thus we find that within the sequential fixation regime, larger population sizes are less likely to cross fitness valleys. + +**Stochastic tunneling** + +For large populations, the probability that deleterious intermediates will fix declines drastically, and successful double mutants will instead arise on the unfixed single-mutant background in a process known as *stochastic tunneling* [16]. This transition occurs when + +$$ +N > \frac{1}{2\delta_i} \log \left[ 1 + \frac{\exp(2\delta_i) - 1}{p_v} \right], \quad (4) +$$ + +where $p_v$ is the probability that the intermediate lineage survives drift long enough to give rise to an ultimately successful double mutant lineage (we will explicitly calculate this probability below). We can then model the appearance of an intermediate mutant lineage destined to give rise to a double mutant lineage as a Poisson process. The rate $\lambda_v$ at which these lineages appear is given by the rate at which intermediate mutations arise times the probability of success of the lineage, integrated over the drift time $t_d$ after appearance of the single-mutant intermediate: + +$$ +\lambda_v = N_{wt}\mu_i \int_0^\infty \frac{\partial p_i(t_d)}{\partial t_d} dt_d. 
\qquad (5) +$$ + +Here $N_{wt}$ is the wild-type population size, and $p_i(t_d)$ is the cumulative probability that a single-mutant lineage \ No newline at end of file diff --git a/samples/texts/5195943/page_5.md b/samples/texts/5195943/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..4c33a74c3bd46fa535842ac411f40334b6fd52be --- /dev/null +++ b/samples/texts/5195943/page_5.md @@ -0,0 +1,49 @@ +will give rise to a successful double-mutant lineage by time $t_d$ after it appears. This probability is given by [17]: + +$$p_i(t_d) = 2 \frac{(a_+ - 1)(1 - a_-)(1 - \exp[-(1 - \delta_{i,eff})(a_+ - a_-)t_d])}{a_+ - 1 + (1 - a_-)(1 - \exp[-(1 - \delta_{i,eff})(a_+ - a_-)t_d])}, \quad (6)$$ + +where + +$$a_{\pm} = \frac{2 - \delta_{i,\text{eff}} - \mu_v s_{v,\text{eff}} \pm \sqrt{(\delta_{i,\text{eff}} + \mu_v s_{v,\text{eff}})^2 + 4 \mu_v s_{v,\text{eff}}}}{2(1 - \delta_{i,\text{eff}})}, \quad (7)$$ + +and $\delta_{i,eff}$ and $s_{v,eff}$ are the fitnesses of the intermediate and valley-crossing genotypes relative to the (time-dependent) mean population fitness. These effective fitnesses and $N_{wt}$ depend on the background at time $t_d$ in a way we must now consider. The background in turn is determined by the frequency $f_u(T_v + t_d)$ of the uphill genotype at time $t_d$ after the appearance of the first single-mutant intermediate lineage destined for success at time $T_v$ (Figure 1c). Thus the uphill genotype frequency sets the fitness background on which valley-crossing probabilities are determined. + +To calculate the relevant effective parameters, we note that the appearance of uphill lineages destined for success can be modeled as a Poisson process. Moreover, because the valley-crossing genotypes make up a tiny fraction of the population (unless the double-mutant has already established), we can treat the genetic background on which these uphill lineages appear as essentially fixed. 
Therefore, the first uphill lineage destined to survive genetic drift will appear at time $T_u$, distributed exponentially with rate

$$\lambda_u = N\mu_u\pi_{wu} \approx N\mu_u(2s_u). \quad (8)$$

Once a successful uphill lineage appears, we assume it establishes in time $\tau_{est} = \gamma_e/(2s_u)$, where $\gamma_e \approx 0.577$ is the Euler-Mascheroni constant [25], and then sweeps deterministically according to

$$f_u(\hat{t}) = \frac{1 - \exp[-(\mu_u + s_u)\hat{t}]}{1 + (s_u/\mu_u) \exp[-(\mu_u + s_u)\hat{t}]}, \quad (9)$$

where $\hat{t} \equiv t - T_u - \tau_{est}$ is the time after establishment of the uphill mutant.

Conditioning on the appearance time $T_u$, we can thus work out our effective parameters:

$$N_{\text{wt}}(t | T_u) = N(1 - f_u(t - T_u - \tau_{\text{est}})) \quad (10)$$

$$\delta_{i,\text{eff}}(t + t_d | T_u) = \delta_i + f_u(t + t_d - T_u - \tau_{\text{est}})s_u \quad (11)$$

$$s_{v,\text{eff}}(t + t_d | T_u) = s_v - f_u(t + t_d - T_u - \tau_{\text{est}})s_u. \quad (12)$$

These effective parameters encompass the two main effects of the sweeping uphill mutation on the valley-crossing probability: the first represents the declining wild-type background on which new mutations can arise, while the remaining two represent the decreasing relative fitness of the intermediate and valley-crossing lineages.

We are interested in the probability that a double-mutant lineage destined for success appears before the uphill genotype fixes. Integrating over all possible appearance times $T_u$, this is given by:

$$P_{\text{cross}} = \int_0^\infty dt_u (\lambda_u e^{-\lambda_u t_u}) \times \left[ 1 - \exp\left[-\int_0^\infty dt\, N_{\text{wt}}(t, t_u)\mu_i \int_0^\infty dt_d \frac{\partial p_i(t+t_d, t_u)}{\partial t_d}\right] \right].
\quad (13)$$ + +This integral is a complete solution for the probability of valley-crossing in the stochastic tunneling regime, provided that the population size is small enough that the Poisson process approximation above holds. Although it does not have a simple closed-form solution, we can easily evaluate the integral numerically. Alternatively, there is a simple and relevant parameter regime in which background fitnesses change slowly. We now consider this case, and show that it allows us to evaluate our expression for the valley-crossing probability explicitly. In a later section below, we turn to the alternative case where the Poisson process approximation breaks down, and we can instead treat all single-mutants deterministically; the valley-crossing probability also simplifies considerably in this very large population regime. + +## Slowly changing background fitness + +One of the main complications of Equation (13) is the integral over possible drift times $t_d$, which reflects the increasing effective deleteriousness of the intermediate genotype as the uphill genotype sweeps to fixation and increases the mean fitness of the background population. However, when $s_u$ is small or $\delta_i$ is large, this background fitness changes slowly compared to the intermediate drift time. In this case, we can treat the background during intermediate drift as effectively constant (Figure 1d). This eliminates the need for an integral over $t_d$, since the time-dependent probability of success of a single-mutant at time $t$ is fully determined by $f_u(t+t_d) \approx f_u(t)$. 
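To make this concrete, Equation (13) can be evaluated numerically under this constant-background approximation: the inner $t_d$ integral collapses to the eventual success probability of Equations (6)-(7), evaluated on the background seen when the intermediate arises, taking the intermediate's effective deficit to be $\delta_i + s_u f_u$. All parameter values below are assumed for illustration:

```python
import numpy as np
from scipy.integrate import quad

# Assumed illustrative parameters
N, mu_u, mu_i, mu_v = 1e5, 1e-7, 1e-7, 1e-7
s_u, s_v, delta_i = 0.01, 0.05, 1e-3

gamma_e = 0.5772156649
lam_u = N * mu_u * 2 * s_u            # rate of successful uphill lineages, Eq. (8)
tau_est = gamma_e / (2 * s_u)         # establishment time of the uphill lineage

def f_u(t_hat):
    """Deterministic sweep of the uphill genotype, Eq. (9); zero before establishment."""
    if t_hat <= 0:
        return 0.0
    e = np.exp(-(mu_u + s_u) * t_hat)
    return (1 - e) / (1 + (s_u / mu_u) * e)

def p_success(delta_eff, s_eff):
    """Eventual success probability of an intermediate lineage, Eqs. (6)-(7) as t_d -> inf."""
    ms = mu_v * s_eff
    root = np.sqrt((delta_eff + ms) ** 2 + 4 * ms)
    a_plus = (2 - delta_eff - ms + root) / (2 * (1 - delta_eff))
    a_minus = (2 - delta_eff - ms - root) / (2 * (1 - delta_eff))
    return 2 * (a_plus - 1) * (1 - a_minus) / ((a_plus - 1) + (1 - a_minus))

def rate(t, t_u):
    """Rate of successful valley-crossing lineages at time t, given uphill appearance at t_u."""
    f = f_u(t - t_u - tau_est)
    return N * (1 - f) * mu_i * p_success(delta_i + s_u * f, s_v - s_u * f)

def bracket(t_u):
    t0 = t_u + tau_est
    head = t0 * rate(0.0, t_u)                    # constant rate before the uphill sweep
    tail, _ = quad(rate, t0, t0 + 50 / s_u, args=(t_u,), limit=200)
    return 1 - np.exp(-(head + tail))

P_cross, _ = quad(lambda t_u: lam_u * np.exp(-lam_u * t_u) * bracket(t_u),
                  0, 20 / lam_u, limit=200)
print("P_cross =", P_cross)
```

The outer integral averages over the exponentially distributed appearance time of the successful uphill lineage; the bracketed term is the chance that at least one successful valley-crossing lineage arises on the shrinking wild-type background.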
+ +We can further simplify the analysis if we treat the probability of crossing the valley as a function of two probabilities: the probability $P_{v,1}$ that the first successful valley-crossing lineage appears before the first successful uphill lineage establishes, and the probability $P_{v,2}$ that a successful valley-crossing lineage appears after the uphill mutant establishes: + +$$P_{\text{cross}} = P_{v,1} + (1 - P_{v,1}) P_{v,2}. \quad (14)$$ + +The calculation of $P_{v,1}$ takes place on a purely wild-type background, so we can use + +$$\lambda_v = N\mu_i p_v, \quad (15)$$ \ No newline at end of file diff --git a/samples/texts/5195943/page_6.md b/samples/texts/5195943/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..46dab8d14c15593a3eb6ba3eaa82e28d54c004df --- /dev/null +++ b/samples/texts/5195943/page_6.md @@ -0,0 +1,69 @@ +where $p_v$, the probability that the intermediate lineage survives drift long enough to give rise to an ultimately successful double mutant lineage, is given by [17]: + +$$p_v = -\delta_i + \sqrt{\delta_i^2 + 4\mu_v s_v} \approx \begin{cases} 2\sqrt{\mu_v s_v} & \text{if } \delta_i \ll 2\sqrt{\mu_v s_v} \\ 2\mu_v s_v / \delta_i & \text{if } \delta_i \gg 2\sqrt{\mu_v s_v}. \end{cases} \quad (16)$$ + +Meanwhile, $\lambda_u$ is unchanged from the original analysis. $P_{v,1}$ is determined by a race between these two exponential random variables. 
Using basic properties of the exponential, the probability that the double-mutant appears before the uphill genotype establishes is therefore

$$P_{v,1} = \begin{cases} \left(\frac{\lambda_v}{\lambda_v+\lambda_u}\right) e^{-\lambda_u \Delta\tau} & \text{if } \Delta\tau > 0 \\ 1-\left(\frac{\lambda_u}{\lambda_v+\lambda_u}\right) e^{\lambda_v \Delta\tau} & \text{if } \Delta\tau < 0, \end{cases} \quad (17)$$

where $\Delta\tau$ represents the difference between the mean drift time $\tau_{\text{drift}}$ of the valley-crossing lineage and the mean establishment time $\tau_{\text{est}}$ of the uphill lineage,

$$\Delta\tau = \tau_{\text{drift}} - \tau_{\text{est}}. \quad (18)$$

Here $\tau_{\text{est}}$ is as given above, and from [17] we can approximate the drift time as

$$\tau_{\text{drift}} = \begin{cases} \log 2 / \sqrt{\mu_v s_v} & \text{if } \delta_i \ll 2\sqrt{\mu_v s_v} \\ 1/\delta_i & \text{if } \delta_i \gg 2\sqrt{\mu_v s_v}. \end{cases} \quad (19)$$

If the successful uphill lineage establishes first, the first successful valley-crossing lineage still has a chance to appear and outcompete it, albeit on a declining wild-type background. Thus for $P_{v,2}$ we obtain an integral similar to that in the original analysis. However, since we are approximating the success probability as constant over the drift time, the rate of successful lineage generation simplifies to

$$\begin{align}
\hat{\lambda}_v(t) &= \mu_i N_{\text{wt}}\, p_v(\delta_{i,\text{eff}}, s_{v,\text{eff}}) \\
&= \mu_i N(1-f_u) \left(-(\delta_i+s_u f_u)+\sqrt{(\delta_i+s_u f_u)^2+4\mu_v(s_v-s_u f_u)}\right). \tag{20}
\end{align}$$

Integrating, and assuming that mutation rates are small compared to selection pressures, we find:

$$\int_0^\infty dt\, \hat{\lambda}_v(t) \approx \left(\frac{\log Ns_u}{s_u}\right) N\mu_i p_v = \left(\frac{\log Ns_u}{s_u}\right) \lambda_v.
\quad (21)$$ + +Combining these results, we find: + +$$\begin{align} +P_{\text{cross}} &= P_{v,1} + (1-P_{v,1}) \left(1-e^{-\left(\frac{\log Ns_u}{s_u}\right)\lambda_v}\right) \\ +&= 1-(1-P_{v,1})e^{-\left(\frac{\log Ns_u}{s_u}\right)\lambda_v}. \tag{22} +\end{align}$$ + +We expect this result to be valid provided that $\delta_i$ is effectively constant over the expected drift time. This will hold when + +$$s_u \ll \begin{cases} 2\sqrt{\frac{2}{\log 2}}\sqrt{\mu_v s_v} & \text{if } \delta_i \ll 2\sqrt{\mu_v s_v} \\ 2\delta_i & \text{if } \delta_i \gg 2\sqrt{\mu_v s_v}. \end{cases} \quad (23)$$ + +### Semi-deterministic tunneling + +We now consider the case where $N\mu_i > 1$ and $N\mu_u > 1$, and hence the Poisson process approximation used to derive Equation (13) breaks down. Fortunately, in this regime the number of single-mutant intermediates and uphill mutants in the population are well approximated by their deterministic expectation (Figure 1e). Thus the only random variable is the appearance time of the first successful double-mutant lineage, which occurs with rate + +$$\lambda_v(t) = N f_i \mu_v \pi_{iv} \approx 2 N f_i \mu_v (s_v - s_u f_u). \quad (24)$$ + +Because intermediates never make up a large portion of the population, $f_u$ is unaffected by $f_i$, and hence is still given by Equation (9). We can then approximate the frequency $f_i$ of the single-mutant intermediates using mutation-selection balance with a declining wild-type population: + +$$f_i = f_i^*(1-f_u), \quad (25)$$ + +where $f_i^*$ gives the independent deterministic dynamics (mutation-selection balance) of single mutants on the wild-type background, and $(1-f_u)$ is the size of the wild-type background. It is useful to transform this into an integral in the frequency domain: + +$$\int_0^\infty \lambda_v(t) dt = \int_0^1 \lambda_v(f_u) \left(\frac{\partial f_u}{\partial t}\right)^{-1} df_u. 
\quad (26)
$$

We note from Equation (9) that

$$\frac{\partial f_u}{\partial t} = \mu_u(1-f_u) + s_u f_u(1-f_u), \quad (27)$$

and therefore

$$t(f_u) = \frac{1}{\mu_u + s_u} \log \left( \frac{1 + (s_u/\mu_u)f_u}{1 - f_u} \right). \quad (28)$$

We further approximate

$$f_i^* = \begin{cases} \mu_i t & \text{if } \delta_i \ll 2\sqrt{\mu_v s_v} \\ \frac{\mu_i}{\delta_i}(1 - \exp(-\delta_i t)) & \text{if } \delta_i \gg 2\sqrt{\mu_v s_v}. \end{cases} \quad (29)$$

Combining these expressions and assuming $\mu_u \ll s_u$, we find

$$\int_0^\infty \lambda_v(t) dt = 2N\mu_i\mu_v s_v \gamma/s_u, \quad (30)$$

where

$$\gamma = \begin{cases} s_u^{-1} \left(\frac{1}{2} \log^2(s_u/\mu_u) - (s_u/s_v) \log(s_u/\mu_u) + \pi^2/6\right) & \text{if } \delta_i \ll 2\sqrt{\mu_v s_v} \\ \delta_i^{-1} \left(\log(s_u/\mu_u) - (s_u/\delta_i)(1 - (s_u/\mu_u)^{-\delta_i/s_u})\right) & \text{if } \delta_i \gg 2\sqrt{\mu_v s_v}. \end{cases} \quad (31)$$

This gives

$$P_{\text{cross}} = 1 - \exp\left[-\int \lambda_v(t)\, dt\right] = 1 - \exp\left[\frac{-2N\mu_i\mu_v s_v \gamma}{s_u}\right] = 1 - \exp[-2N\Gamma], \tag{32}$$

where we have defined the useful quantity

$$\Gamma \equiv \frac{\mu_i \mu_v s_v \gamma}{s_u}. \quad (33)$$

Thus we see that in very large populations, the probability of valley-crossing depends in a simple way on the single composite parameter $\Gamma$. The form of this composite parameter depends crucially on whether the fitness cost of the intermediate genotype is large or small, as defined in Equation (31).
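A short script makes the closed form of Equations (31)-(33) concrete; the parameter values are assumed for illustration, and `gamma_factor` implements the two branches of Equation (31):

```python
import numpy as np

def gamma_factor(mu_u, mu_v, s_u, s_v, delta_i):
    """Composite factor gamma of Eq. (31)."""
    L = np.log(s_u / mu_u)
    if delta_i < 2 * np.sqrt(mu_v * s_v):        # (nearly) neutral intermediates
        return (0.5 * L**2 - (s_u / s_v) * L + np.pi**2 / 6) / s_u
    # substantially deleterious intermediates
    return (L - (s_u / delta_i) * (1 - (s_u / mu_u) ** (-delta_i / s_u))) / delta_i

def p_cross(N, mu_u, mu_i, mu_v, s_u, s_v, delta_i):
    g = gamma_factor(mu_u, mu_v, s_u, s_v, delta_i)
    Gamma = mu_i * mu_v * s_v * g / s_u          # Eq. (33)
    return 1 - np.exp(-2 * N * Gamma)            # Eq. (32)

# Assumed illustrative parameters: a plateau vs. a deep valley
p_neutral = p_cross(1e7, 1e-6, 1e-6, 1e-6, 0.01, 0.05, 0.0)
p_valley = p_cross(1e7, 1e-6, 1e-6, 1e-6, 0.01, 0.05, 0.01)
print(p_neutral, p_valley)
```

As expected, making the intermediate more deleterious (while holding everything else fixed) reduces the crossing probability.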
## Discussion

Our results have shown that the size of a population strongly influences how “farsighted” it can be. In small populations, genetic drift is strong relative to selection, so the evolutionary dynamics proceed by sequential fixation. Since fixation of a deleterious intermediate becomes less likely in larger populations, increasing the population size initially decreases the relative influence of fitness-valley crossing. However, as the population size increases further, beneficial mutations take longer to fix, maintaining diversity in the population and allowing double mutants to stochastically tunnel on the declining wild-type background [15-17]. Together, these effects lead to a non-monotonic relationship between population size and the probability that evolution will favor the complex adaptation over the directly uphill path, as illustrated in Figure 3a. This non-monotonic dependence on population size is similar in spirit to the results of earlier work analyzing evolution on epistatic landscapes in the absence of fitness valleys [21-23].

It is interesting to note that $P_{\text{cross}}$ does not immediately begin to rise with the onset of tunneling. Instead, the dependence is more complex, as a consequence of the tradeoff between the increasing mutational supply and the increasing fixation times. Nevertheless, for populations in which the transition to valley-crossing behavior occurs in the semi-deterministic regime, we can derive a simple expression for the threshold size at which the population will tend to cross valleys with probability $P_{\text{cross}}$. A straightforward inversion of Equation (32) gives

$$
N = \frac{-\log[1 - P_{\text{cross}}]}{2\Gamma}, \qquad (34)
$$

valid for large population sizes in the semi-deterministic regime. Thus in this regime the threshold size above which a population exhibits a given degree of foresight (i.e. has a particular $P_{\text{cross}}$) depends only on $\Gamma$.
To illustrate this, in Figure 3b we show $P_{\text{cross}}$ as a function of $N\Gamma$ for a variety of simulations across different values of $\mu_u$, $\mu_i$, $\mu_v$, $s_u$, $\delta_i$ and $s_v$. It is clear that even across a wide parameter range, $N\Gamma$ is a reliable predictor of the valley-crossing probability in the semi-deterministic limit.

**Figure 3 Valley crossing probability.** (a) Simulation results for $\mu_v = 5 \times 10^{-6}$, $\mu_i = \mu_u = 5 \times 10^{-5}$, $\delta_i = 0$, and $s_v = 0.07$. The black vertical dashed lines indicate the boundaries between sequential fixation, stochastic tunneling, and semi-deterministic tunneling. Markers represent valley-crossing probabilities inferred from 1000 simulations per point. Lines represent theoretical predictions in each regime: in the stochastic tunneling regime, dashed lines represent the slowly changing background fitness approximation, and solid lines represent the full integral solution, Equation (13). The color of each line indicates the uphill fitness $s_u$. (b) Crossing probability for populations in the semi-deterministic regime across a wide range of parameters, plotted against the predictive parameter $N\Gamma$. Filled markers represent deleterious intermediates ($\delta_i = 10\sqrt{\mu_v s_v}$), while open markers represent neutral intermediates ($\delta_i = 0$).

## Multiple intermediates and evolutionary predictability

Recently, Szendro et al. [26] simulated evolution across a wide range of population sizes on an experimentally-derived epistatic fitness landscape, finding that “evolutionary entropy” (i.e. unpredictability over all possible outcomes, given by $S = -\sum p_j \log p_j$) varied non-monotonically with population size.
Specifically, these authors found that entropy initially decreased with $N$ above a characteristic population size $N \propto 1/\mu$, before increasing again above a second characteristic size $N \propto 1/\mu^2$; they argued that these points were related to the supply rate of single and double mutants respectively. Our analysis is consistent with these results. For example, the increase in entropy at $N \propto 1/\mu^2$ found by [26] corresponds to our result that valley-crossing begins to significantly influence evolution when $N \sim 1/\Gamma$: this is approximately proportional to $1/\mu^2$, albeit with an additional log dependence on $\mu$ from the $\gamma$ factor that would be harder to observe experimentally. + +We find related behavior if we extend our analysis to valleys with more intermediates. As a simple example, we consider a fitness landscape where we add deleterious intermediates $u_0$ and $v_0$ (each occurring at rate $\mu_0$ and leading to fitness cost $\delta_i$) to the uphill and valley-crossing branches respectively, so that we now have competition between a single-intermediate valley and a two-intermediate valley. In a large enough population, mutation-selection balance will ensure that + +$$N_{u_0} = N_{v_0} = \frac{N\mu_0}{\delta_i}. \qquad (35)$$ + +If we assume that these sub-populations are large enough that double-mutants behave deterministically, then we find the crossing probability obeys + +$$-\log[1-P_{\text{cross}}] = 2\left(\frac{\mu_0}{\delta_i}\right)N\Gamma = 2\left(\frac{s_v\gamma}{s_u\delta_i}\right)N\mu_0\mu_i\mu_v. \qquad (36)$$ + +Thus our analysis of valleys with multiple intermediates suggests that the $N\mu^2$ entropy peak is not unique: as the population grows larger, there should be entropy peaks corresponding to foresight across valleys with increasing numbers of intermediates. 
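Under the stated assumptions, Equation (36) is a one-line modification of the semi-deterministic result: the broad valley simply rescales the exponent by $\mu_0/\delta_i$. A minimal sketch, with the values of $\Gamma$, $\mu_0$ and $\delta_i$ assumed for illustration:

```python
import numpy as np

# Assumed illustrative values; Gamma is the composite parameter of Eq. (33)
N, Gamma = 1e8, 1e-11
mu_0, delta_i = 1e-6, 1e-3

p_single = 1 - np.exp(-2 * N * Gamma)                      # one intermediate, Eq. (32)
p_double = 1 - np.exp(-2 * (mu_0 / delta_i) * N * Gamma)   # two intermediates, Eq. (36)
print(p_single, p_double)
```

With $\mu_0/\delta_i \ll 1$, each additional intermediate suppresses crossing by roughly that factor, which is why the corresponding entropy peak is pushed to much larger populations.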
The emergence of such second peaks has been observed in simulations [23], and our model offers a quantitative outline of where such peaks should occur given the relevant evolutionary parameters. In the example above, for instance, there would be an entropy peak at a population size scaling as $1/\mu^3$; in general, we expect entropy peaks at population sizes scaling as $1/\mu^n$ for valleys with $(n-1)$ intermediates. In practice, however, the semi-deterministic approximation will break down for any sizable number of intermediates, unless the population size is unrealistically large.

## Many paths

Throughout this paper, we have assumed the presence of a single uphill mutation and fitness valley. We now consider how our analysis can be extended to predict how evolution chooses among many such possible mutational trajectories.

In small populations, in the sequential fixation regime, we simply add transition matrix elements for the additional mutations, with the uphill mutations transitioning to the uphill absorbing state, and similarly for the valley-crossing mutations. When stochastic tunneling is important, we must instead add the rates of single valley-crossing mutants to get a total rate of

$$\Lambda_v = \sum_v \lambda_v, \qquad (37)$$

and similarly find the total rate of uphill mutants that are destined to survive drift,

$$\Lambda_u = \sum_u \lambda_u = \sum_u N\mu_u\pi(s_u) = \int N U_b \pi(s_u) \rho(s_u)\, ds_u, \qquad (38)$$

where in the last equality we have replaced the discrete collection of uphill mutations with a continuous distribution of uphill fitness effects $\rho(s)$, and the discrete collection of mutation rates $\mu_u$ with a total beneficial mutation rate $U_b$. This is valid as long as $\Lambda_u \ll 1$.
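For instance, for an assumed exponential distribution of uphill fitness effects and the weak-selection survival probability $\pi(s) \approx 2s$, the integral in Equation (38) can be checked numerically against its closed form $2NU_b\sigma$:

```python
import numpy as np
from scipy.integrate import quad

# Assumed illustrative parameters
N, U_b, sigma = 1e4, 1e-8, 0.02

rho = lambda s: np.exp(-s / sigma) / sigma      # assumed exponential DFE
Lambda_u, _ = quad(lambda s: N * U_b * 2 * s * rho(s), 0, np.inf)

print(Lambda_u, 2 * N * U_b * sigma)            # the two should agree
```

Here $\Lambda_u \ll 1$, so the Poisson treatment of successful uphill lineages remains valid for these assumed values.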
Once an uphill mutation destined to survive drift occurs, the probability that it has fitness $s$ is given by the ratio between its partial rate and the total rate; formally, the probability density is

$$f(s) = N U_b \pi(s)\rho(s)/\Lambda_u. \qquad (39)$$

Using these expressions, we can integrate our earlier results over all possible trajectories. However, we note that if there are a large number of weakly beneficial mutations, it is possible that the first successful lineage to appear will be outcompeted by a stronger uphill mutation that arises later but fixes first. Our analysis applies provided that we consider only uphill mutations that reach a significant fraction $k$ of the population before a new, more fit uphill mutant is expected to be produced: i.e.

$$\Lambda_u \tau_k = \left( \int_{s_{\text{cutoff}}}^{\infty} N U_b \pi(s) \rho(s)\, ds \right) \tau_k < 1, \qquad (40)$$

where $\tau_k$ is the expected time for a single mutant destined for success to reach frequency $k$ in the population. This is consistent with our intuition that as the population size grows larger, we increasingly expect the mutations of largest effect to dominate the dynamics.

## Conclusions

Using a simple three-locus fitness landscape model (Figure 1a), we identified several regimes of valley-crossing with qualitatively different behavior (Figure 2). By examining the behavior in each of these regimes in turn, we found that the probability of valley-crossing has a complex, non-monotonic dependence on population size (Figure 3a), and identified a parameter $\Gamma$ that reliably predicts the population size at which valley crossing becomes preferred (Figure 3b).
Finally, we showed how these results can be extended to fitness valleys with more intermediates, and to fitness landscapes with many possible evolutionary trajectories, as is the case in most naturally occurring populations. + +### Abbreviation + +DFE: Distribution of fitness effects. + +### Competing interests + +The authors declare that they have no competing interests. + +### Authors' contributions + +IEO and MMD designed the research, conducted the analysis, and wrote the paper. Both authors read and approved the final manuscript. + +### Acknowledgements + +We thank Benjamin Good, Sergey Kryazhimskiy, Elizabeth Jerison, and other members of the Desai lab for useful discussions, and Katya Kosheleva for help with the figures. This work was supported by the Harvard HCRP, Harvard Herchel Smith, and Harvard PRISE programs (I.E.O.), and the James S. McDonnell Foundation, the Alfred P. Sloan Foundation, the Harvard Milton Fund, grant PHY 1313638 from the NSF, and grant GM104239 from the NIH (M.M.D.). Simulations in this paper were performed on the Odyssey cluster supported by the Research Computing Group at Harvard University. + +Received: 25 November 2014 Accepted: 11 March 2015 +Published online: 26 March 2015 + +### References + +1. Perfeito L, Fernandes L, Mota C, Gordo I. Adaptive mutations in bacteria: High rate and small effects. Science. 2007;317(5839):813-5. + +2. Tenaillon O, Rodríguez-Verdugo A, Gaut RL, McDonald P, Bennett AF, Long AD, et al. The molecular diversity of adaptive convergence. Science. 2012;335(6067):457-61. + +3. Lang GI, Rice DP, Hickman MJ, Sodergren E, Weinstock GM, Botstein D, Desai MM. Pervasive genetic hitchhiking and clonal interference in forty evolving yeast populations. Nature. 2013;500(7464):571-4. + +4. Gerrish P, Lenski R. The fate of competing beneficial mutations in an asexual population. Genetica. 1998;102:127-44. + +5. Schiffels S, Szollosi GJ, Mustonen V, Lassig M. Emergent neutrality in adaptive asexual evolution. 
Genetics. 2011;189(4):1361-75. + +6. Good BH, Rouzine IM, Balick DJ, Hallatschek O, Desai MM. Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations. Proc Nat Acad Sci. 2012;109:4950-5. + +7. Weinreich DM, Watson RA, Chao L. Perspective: Sign epistasis and genetic constraint on evolutionary trajectories. Evolution. 2005;59(6):1165-74. + +8. Weinreich DM, Delaney NF, DePristo MA, Hartl DL. Darwinian evolution can follow only very few mutational paths to fitter proteins. Science. 2006;312(5770):111-4. + +9. Poelwijk FJ, Kiviet DJ, Weinreich DM, Tans SJ. Empirical fitness landscapes reveal accessible evolutionary paths. Nature. 2007;445:383-6. + +10. Kvitek DJ, Sherlock G. Reciprocal sign epistasis between frequently experimentally evolved adaptive mutations causes a rugged fitness landscape. PLoS Genet. 2011;7(4):e1002056. + +11. Silva RF, Mendonça SC, Carvalho LM, Reis AM, Gordo I, Trindade S, et al. Pervasive sign epistasis between conjugative plasmids and drug-resistance chromosomal mutations. PLoS Genet. 2011;7(7):e1002181. + +12. Dawid A, Kiviet DJ, Kogenaru M, de Vos M, Tans SJ. Multiple peaks and reciprocal sign epistasis in an empirically determined genotype-phenotype landscape. Chaos: Interdisciplinary J Nonlinear Sci. 2010;20(2):026105. + +13. Salverda ML, Dellus E, Gorter FA, Debets AJ, Van Der Oost J, Hoekstra RF, et al. Initial mutations direct alternative pathways of protein evolution. PLoS Genet. 2011;7(3):e1001321. + +14. Woods RJ, Barrick JE, Cooper TF, Shrestha U, Kauth MR, Lenski RE. Second-order selection for evolvability in a large escherichia coli population. Science. 2011;331(6023):1433-6. + +15. Weinreich DM, Chao L. Rapid evolutionary escape by large populations from local fitness peaks is likely in nature. Evolution. 2005;59:1175-82. + +16. Iwasa Y, Michor F, Nowak MA. Stochastic tunnels in evolutionary dynamics. Genetics. 2004;166(3):1571-9. + +17. Weissman DB, Desai MM, Fisher DS, Feldman MW. 
The rate at which asexual populations cross fitness valleys. Theor Population Biol. 2009;75(4):286-300.

18. Gokhale CS, Iwasa Y, Nowak MA, Traulsen A. The pace of evolution across fitness valleys. J Theor Biol. 2009;259(3):613-20.

19. Altland A, Fischer A, Krug J, Szendro IG. Rare events in population genetics: stochastic tunneling in a two-locus model with recombination. Phys Rev Lett. 2011;106(8):088101.

20. Weissman DB, Feldman MW, Fisher DS. The rate of fitness-valley crossing in sexual populations. Genetics. 2010;186(4):1389-410.

21. Rozen DE, Habets MG, Handel A, de Visser JAG. Heterogeneous adaptive trajectories of small populations on complex fitness landscapes. PLoS One. 2008;3(3):e1715.

22. Handel A, Rozen DE. The impact of population size on the evolution of asexual microbes on smooth versus rugged fitness landscapes. BMC Evolutionary Biol. 2009;9(1):236.

23. Jain K, Krug J, Park S-C. Evolutionary advantage of small populations on complex fitness landscapes. Evolution. 2011;65(7):1945-55.

24. van Nimwegen E, Crutchfield JP. Metastable evolutionary dynamics: crossing fitness barriers or escaping via neutral paths? Bull Math Biol. 2000;62(5):799-848.

25. Desai MM, Fisher DS. Beneficial mutation-selection balance and the effect of linkage on positive selection. Genetics. 2007;176(3):1759-98.

26. Szendro IG, Franke J, de Visser JAG, Krug J. Predictability of evolution depends nonmonotonically on population size. Proc Nat Acad Sci. 2013;110(2):571-6.
+ \ No newline at end of file diff --git a/samples/texts/6591221/page_1.md b/samples/texts/6591221/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..570184f338b940a8922f615e91f856a40ed3143a --- /dev/null +++ b/samples/texts/6591221/page_1.md @@ -0,0 +1,31 @@ +VARIATIONS IN SOLAR LUMINOSITY FROM TIMESCALES OF MINUTES TO MONTHS

JON D. PELLETIER¹

Received 1995 September 13; accepted 1996 March 5

ABSTRACT

We present the power spectrum of solar irradiance during 1985 and 1987, obtained from the active cavity radiometer irradiance monitor project, from timescales of minutes to months. At low frequency, the spectra are Lorentzian [proportional to $1/(f^2 + f_0^2)$]. At higher frequencies, they are proportional to $f^{-1/2}$. A linear, stochastic model of the turbulent heat transfer between the granulation layer (modeled as a homogeneous thin layer with a radiative boundary condition) and the rest of the convection zone (modeled as a homogeneous thick layer with thermal and diffusion constants appropriate to the lower convection zone) explains the observed spectrum.

*Subject headings:* convection — diffusion — Sun: activity — turbulence

# 1. INTRODUCTION

The luminosity of the Sun has significant variations on timescales from minutes to years. Data from the NIMBUS 7 and active cavity radiometer irradiance monitor (ACRIM) projects have provided us with a high-quality time series of these variations. Some aspects of these variations can be attributed to specific physical processes (see Stix 1989 for an introductory review).
For example, oscillations of the Sun result in a well-understood 5 minute periodicity. At the yearly and decadal timescale, there is a strong correlation between the irradiance and variations in the solar magnetic activity. Variations from minutes to months have a simple spectral form but are not well understood (Frohlich 1993). + +Frohlich (1987) has published power spectra of ACRIM data from 1980 and 1985. He reported that the power spectrum is flat for frequencies corresponding to timescales greater than 1 week, proportional to $f^{-2}$ for timescales between 12 hr and 1 week, and proportional to $f^{-1}$ at timescales down to minutes. Our studies agree that the low-frequency spectrum is Lorentzian. We find, however, that the high-frequency spectrum is proportional to $f^{-1/2}$ at timescales shorter than 1 day. + +Kuhn, Libbrecht, & Dicke (1988) have suggested that the interaction between the solar surface and deeper portions of the convection zone may cause the low-frequency variations. We explore that possibility in this Letter. + +The simplest way to model the convective transport of heat is to assume that the flux of heat is proportional to its gradient. Fluctuations in heat energy will then be governed by the diffusion equation. This is a valid approximation if the turbulence is small scale (dominated by eddies whose length scale is much smaller than that of the mean gradient of potential temperature; Garratt 1992). Because of the stochastic nature of turbulence, a stochastic diffusion model is appropriate for transport with the flux-gradient approximation. In this Letter, we study the fluctuations in luminosity of a thin homogeneous surface granulation layer with a radiation boundary condition, exchanging heat with a deep, homogeneous layer below, with density and diffusion constants appropriate to the lower convection zone. 
The high and low crossover frequencies in the spectrum correspond to timescales of thermal and radiative equilibration of the convection zone, respectively. The timescales for equilibration are given by the model as a function of the thermal and diffusion constants of the granulation layer and lower convection zone. Estimates of these constants obtained from mixing length theory yield order-of-magnitude agreement between the crossover frequencies predicted by the model and those observed.

# 2. POWER SPECTRUM OF ACRIM DATA

In Figure 1, we present the logarithm (base 10) of the normalized Lomb periodograms of ACRIM solar irradiance data sampled during 1987 and 1985, plotted as a function of the logarithm of the frequency. We chose to analyze these years since they appear to represent extremes in the variation of solar activity at low frequencies. The low-frequency variances in the 1985 and 1987 data are small and large, respectively. Variations in the solar irradiance at yearly timescales are generally agreed to be the result of variations in magnetic activity. We have chosen these years in order to assess the influence of magnetic activity on the power spectrum and distinguish its influence from that of the mechanism proposed in our model.

Since the data were sampled at irregular intervals, simple fast Fourier transform (FFT) methods of estimating the power spectrum are only available if we average the data over some uniform time interval. We chose instead to use the Lomb periodogram suggested by Press et al. (1992) for unevenly sampled data. Above frequencies of $\log f = -2.0$, we averaged the periodogram in logarithmically spaced frequency intervals of width $\Delta \log f = 0.01$, in order to reduce the scatter. We shifted the 1985 spectrum down by 2.5 in $\log S(f)$ to plot it on the same graph as the 1987 data.

The high-frequency behavior is the same for both spectra. An $f^{-2}$ region flattens out to $f^{-1/2}$ at frequencies greater than $f \approx 1/(1 \text{ day})$.
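The Lomb periodogram procedure described above can be sketched with `scipy.signal.lombscargle`. The synthetic, unevenly sampled series below is only a stand-in for the ACRIM data; all numbers are illustrative:

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic, unevenly sampled sinusoid (illustrative stand-in for ACRIM data).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))       # irregular sample times
f_true = 0.5                                    # cycles per unit time
y = np.sin(2.0 * np.pi * f_true * t)
y = y - y.mean()                                # remove the mean before Lomb analysis

freqs = np.linspace(0.01, 2.0, 2000)            # trial frequencies (cycles per unit)
pgram = lombscargle(t, y, 2.0 * np.pi * freqs)  # lombscargle expects angular freqs

f_peak = freqs[np.argmax(pgram)]
print(f_peak)  # close to f_true
```

Because the sampling is irregular, the periodogram is evaluated on an explicit frequency grid; in practice one would then average it in logarithmically spaced bins, as the text describes.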
Our observation of a $f^{-1/2}$ scaling region at high frequencies disagrees with Frohlich's (1987) conclusion that the high-frequency scaling region is proportional to $f^{-1}$. He reported this conclusion for 1985 ACRIM data, the same data we analyzed + +¹ Department of Geological Sciences, Snee Hall, Cornell University, Ithaca, NY 14853. \ No newline at end of file diff --git a/samples/texts/6591221/page_2.md b/samples/texts/6591221/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..6efbc75ca85b5d306e1f3c83c37395341f0efaf5 --- /dev/null +++ b/samples/texts/6591221/page_2.md @@ -0,0 +1,25 @@ +FIG. 1.—Logarithm (base 10) of the normalized Lomb periodograms of solar irradiance in 1987 (upper plot) and 1985 (lower plot) from the ACRIM project vs. the logarithm of the frequency in $hr^{-1}$. Crossover frequencies for 1987 are $f_0 = 1/(5 \text{ months})$ and $f_1 = 1/(1 \text{ day})$. The crossover frequencies for 1985 are $f_0 = 1/(1 \text{ month})$ and $f_1 = 1/(1 \text{ day})$. The 1985 spectrum is shifted down by $\log S(f) = 2.5$. + +as part of our study. Our confidence in our interpretation lies in the greater resolution of our spectra, the $f^{-1/2}$ range of which is 50% larger than Frohlich's (1987). Large peaks appear at the orbital frequency of the satellite and its harmonics. These peaks are an artifact of the spectral estimation. + +The low-frequency behavior of both spectra is Lorentzian (constant at low frequencies and proportional to $f^{-2}$ at higher frequencies), in agreement with Frohlich's (1987) results. Aside from the basic form of the spectra, there is a large variability between the crossover frequencies and the magnitude of the two spectra reported here and those published by Frohlich (1987). This variability was also discussed by Frohlich (1987). The crossover frequencies of the Lorentzian portion of the 1987 and 1985 data reported here are $f = 1/(5 \text{ months})$ and $f = 1/(1 \text{ month})$, respectively. 
We interpret this variability as being due either to variations in the magnetic activity or to limitations of our model at these timescales.

### 3. MODEL OF VARIATIONS IN SOLAR LUMINOSITY

The variations in the irradiance of the Sun will be proportional to the variations in its surface temperature. This follows from the fact that the power emitted by the Sun (modeled as a blackbody), $F - F_e = \sigma_B T^4 - \sigma_B T_e^4$, can be well approximated by a linear dependence on $T - T_e$ for small departures from equilibrium.

Turbulent transport of heat in the convection zone of the Sun can be modeled by a stochastic diffusion process within the flux-gradient approximation. A stochastic diffusion process can be studied analytically by adding a noise term to the flux of a deterministic diffusion equation (van Kampen 1981):

$$ \rho c \frac{\partial \Delta T}{\partial t} = - \frac{\partial J}{\partial x}, \qquad (1) $$

$$ J = -\sigma \frac{\partial \Delta T}{\partial x} + \eta(x, t), \qquad (2) $$

where $\Delta T$ denotes the fluctuations in temperature from equilibrium and the mean and variance of the noise are given by

$$ \langle \eta(x, t) \rangle = 0, \qquad (3) $$

$$ \langle \eta(x, t)\eta(x', t') \rangle \propto \sigma(x)\langle T(x) \rangle^2 \delta(x-x')\delta(t-t'). \qquad (4) $$

To show how a power spectrum proportional to $f^{-1/2}$ can arise from a stochastic diffusion process, we will calculate the power spectrum of temperature fluctuations in a layer of width $2l$ exchanging heat with an infinite, one-dimensional, homogeneous space. Our derivation is nearly the same as that of Voss & Clarke (1976).
Defining the Fourier transform as

$$ J(k, \omega) = \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dt e^{-ikx} e^{i\omega t} J(x, t), \qquad (5) $$ \ No newline at end of file diff --git a/samples/texts/6591221/page_3.md b/samples/texts/6591221/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..e8ce9ad91aba0320dbe8e9c25999fb24e6c8342c --- /dev/null +++ b/samples/texts/6591221/page_3.md @@ -0,0 +1,41 @@ +the Fourier transform of the heat flux of the stochastic diffusion equation is

$$J(k, \omega) = \frac{i\omega\eta(k, \omega)}{Dk^2 - i\omega}. \quad (6)$$

The rate of change of heat energy in the layer will be given by the difference in heat flux out of the boundaries, located at $\pm l$: $dE(t)/dt = J(l, t) - J(-l, t)$. The Fourier transform of $E(t)$ is then

$$E(\omega) = \frac{1}{(2\pi)^{1/2}\,\omega} \int_{-\infty}^{\infty} dk \sin(kl) J(k, \omega). \qquad (7)$$

The power spectrum of variations in $E(t)$, $S_E(\omega) = |E(\omega)|^2$, is

$$S_E(\omega) \propto \int_{-\infty}^{\infty} \frac{dk \sin^2(kl)}{D^2 k^4 + \omega^2} \propto \omega^{-1/2} \qquad (8)$$

for low frequencies. Since $\Delta T \propto \Delta E$, $S_T(\omega) \propto \omega^{-1/2}$, as well.

At lower frequencies, the entire convection zone achieves thermal equilibrium and the lower convection zone can no longer absorb fluctuations in the heat flux from the radiation boundary condition. The fluctuating heat flux near the surface adds and subtracts heat from the top of the convection zone through the radiation boundary condition. This results in temperature and irradiance variations with a random-walk ($f^{-2}$) spectrum.

The fluctuating input and output of heat in the $f^{-2}$ region will cause large variations from equilibrium. When the temperature of the convection zone becomes larger than the equilibrium temperature, it will radiate, on average, more heat than at equilibrium.
Conversely, when the temperature of the convection zone wanders lower than the equilibrium temperature, less heat is radiated. This negative feedback limits the variance at low frequencies, resulting in a constant power spectrum. + +The model we present was solved in the context of a different problem by van Vliet, van der Ziel, & Schmidt (1980). They considered the temperature fluctuations in a thin metal film supported by a substrate. + +The geometry of the model is a thin granulation layer of width $2 \times 10^6$ m and uniform density (equal to the density at the bottom of the granulation layer, where most of the heat capacity resides) of 0.003 kg m⁻³ coupled to a thick layer of uniform density representing the rest of the convection zone. For convenience, the layers have a planar geometry in this simplified model. The turbulent diffusivity is estimated from mixing length theory to be $\alpha = \frac{1}{3}vl$, where v and l are the characteristic velocity and eddy sizes, respectively. The eddy size, l, is usually approximated as 1 pressure scale height. In the case of the granulation layer, however, the dominant eddy size is the size of the convection cell, approximately $2 \times 10^6$ m. The velocity at the bottom of the granulation layer is on the order of 1000 m s⁻¹. These estimates yield an eddy diffusivity of $10^9$ m² s⁻¹ near the solar surface. The thermal conductivity, $\sigma = \alpha\rho c$, is $3 \times 10^7$ W m⁻¹ K⁻¹ since $c = 10$ J kg⁻¹ K⁻¹ for a monatomic hydrogen gas. The velocity, thickness, and specific heat values are from a standard solar model presented in Stix (1992). The density values are from Bohm (1963). + +The granulation layer sits atop the rest of the convection zone. 
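The mixing-length estimates for the granulation layer quoted above follow from two one-line formulas and can be checked directly (the numerical values are copied from the text):

```python
# Values copied from the text for the granulation layer.
v = 1000.0    # m s^-1, velocity at the bottom of the granulation layer
l = 2e6       # m, dominant eddy size (the convection-cell size)
rho = 0.003   # kg m^-3, density at the bottom of the granulation layer
c = 10.0      # J kg^-1 K^-1, specific heat quoted in the text

alpha = v * l / 3.0      # mixing-length eddy diffusivity, alpha = (1/3) v l
sigma = alpha * rho * c  # thermal conductivity, sigma = alpha * rho * c

print(alpha)  # ~7e8 m^2 s^-1, i.e. the quoted ~1e9 to order of magnitude
print(sigma)  # ~2e7 W m^-1 K^-1, i.e. the quoted ~3e7 to order of magnitude
```

Since only order-of-magnitude agreement is claimed, these round-number results are consistent with the diffusivity and conductivity quoted in the text.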
We approximate the density, width, and diffusivity of the remainder of the convection zone by their values near the bottom of the convection zone because its high density results in a concentration of heat capacity there and its slow diffusivity is the rate-limiting step of thermal equilibration of the convection zone. The width and dominant eddy scale will both be given by $10^7$ m, the width of the lowest pressure scale height of the convection zone. The density is estimated to be 0.5 kg m⁻³, and the velocity is on the order of 20 m s⁻¹. These values are the arithmetic means of the densities and velocities at the top and bottom of the pressure scale height. These values yield an eddy diffusivity of $10^8$ m² s⁻¹ for the bottom of the convection zone. Our diffusivities agree with the estimates of Stix (1992), who quoted the range of diffusivities in the convection zone as $10^8-10^9$ m² s⁻¹. The thermal conductivity is $3 \times 10^8$ W m⁻¹ K⁻¹. + +The equation for temperature fluctuations in space and time in the model is + +$$\frac{\partial \Delta T(x, t)}{\partial t} - \alpha(x) \frac{\partial^2 \Delta T(x, t)}{\partial x^2} = - \frac{\partial \eta(x, t)}{\partial x}, \qquad (9)$$ + +with + +$$\langle \eta(x, t) \rangle = 0, \qquad (10)$$ + +$$\langle (\eta(x,t)\eta(x',t')) \rangle \propto \sigma(x) \langle T(x) \rangle^2 \delta(x-x') \delta(t-t'). \qquad (11)$$ + +The boundary conditions are that there be no heat flow out of the bottom of the convection zone and that continuity of temperature and heat flux at the boundary separating the granulation layer and the deeper convection zone be given by + +$$\sigma' \left. \frac{\partial T}{\partial x} \right|_{x=w_2} = 0, \qquad (12)$$ + +$$\Delta T(x = w_1^+) = \Delta T(x = w_1^-), \qquad (13)$$ + +$$\sigma \left. \frac{\partial \Delta T}{\partial x} \right|_{x=w_1^-} = \sigma' \left. 
\frac{\partial \Delta T}{\partial x} \right|_{x=w_1^+}, \qquad (14)$$ \ No newline at end of file diff --git a/samples/texts/6591221/page_4.md b/samples/texts/6591221/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..b6095e9baad742fb029c80994c4a85d07706e33b --- /dev/null +++ b/samples/texts/6591221/page_4.md @@ -0,0 +1,47 @@ +where $w_1$ and $w_2$ are the widths of the granulation layer and deep convection zone, respectively, and the primes denote the thermal and diffusion constants of the deep convection zone.

At the top of the granulation layer, we impose a blackbody radiation boundary condition linearized about equilibrium,

$$ \sigma \left. \frac{\partial \Delta T}{\partial x} \right|_{x=0} = g \Delta T (x=0), \qquad (15) $$

where $g = 4\sigma_B T_0^3 = 2 \times 10^4$ W m$^{-2}$ K$^{-1}$ is the thermal conductance of heat out of the Sun and $\sigma_B$ is the Stefan-Boltzmann constant.

Van Vliet et al. (1980) used Green's functions to solve this model. The power spectrum of the average temperature in the granulation layer, and hence the irradiance, is, using their solution,

$$ S(f) \propto \operatorname{Re} \left[ L^2 \left\{ \frac{\sigma'L}{\sigma L'} \tanh \left( \frac{w_2}{L'} \right) \left[ \left( \frac{gw_1}{\sigma} - 1 \right) \tanh \left( \frac{w_1}{L} \right) - \frac{2gL}{\sigma} \frac{\cosh (w_1/L) - 1}{\cosh (w_1/L)} + \frac{w_1}{L} \right] + \frac{gw_1}{\sigma} + \left[ \frac{w_1}{L} - \frac{gL}{\sigma} \tanh \left( \frac{w_1}{L} \right) \right] \right\} \left\{ \left[ \tanh \left( \frac{w_1}{L} \right) + \frac{\sigma L}{g} \right] \frac{\sigma'L}{\sigma L'} \tanh \left( \frac{w_2}{L'} \right) + \left[ 1 + \frac{\sigma}{Lg} \tanh \left( \frac{w_1}{L} \right) \right] \right\}^{-1} \right] .
\quad (16) $$

For very low frequencies,

$$ \tanh\left(\frac{w_1}{L}\right) \approx \frac{w_1}{L}, \quad \tanh\left(\frac{w_2}{L'}\right) \approx \frac{w_2}{L'}, \qquad (17) $$

$$ \frac{\cosh(w_1/L) - 1}{\cosh(w_1/L)} \approx \frac{1}{2} \frac{w_1^2}{L^2}. \qquad (18) $$

Reducing equation (16),

$$ S_{\Delta T_{\text{av}}} (f) \propto \frac{1}{1 + (\omega^2/\omega_0^2)} \propto \frac{1}{f^2 + f_0^2}, \qquad (19) $$

which is the low-frequency Lorentzian spectrum observed in the ACRIM data. The crossover frequency as a function of the constants chosen for the model is

$$ f_0 = \frac{g}{2\pi[(w_1 c\rho + w_2 c'\rho')(1 + gw_1/\sigma)]} \approx \frac{\sigma}{2\pi w_1 w_2 c'\rho'} \approx \frac{1}{8 \text{ months}}, \qquad (20) $$

which is within an order of magnitude of the observed crossover frequencies of the 1987 and 1985 ACRIM data, $f = 1/(5$ months) and $f = 1/(1$ month), respectively.

At higher frequencies,

$$ \tanh\left(\frac{w_1}{L}\right) \approx \frac{w_1}{L}, \quad \tanh\left(\frac{w_2}{L'}\right) \approx 1, \qquad (21) $$

$$ \frac{\cosh(w_1/L) - 1}{\cosh(w_1/L)} \approx \frac{1}{2} \frac{w_1^2}{L^2}; \qquad (22) $$

then

$$ S_{\Delta T_{\text{av}}} (f) \propto \frac{1}{2} \left( \frac{2gw_1}{\sigma} \right)^{1/2} \left( \frac{c\rho\sigma}{c'\rho'\sigma'} \right)^{1/2} \left( \frac{g}{w_1\rho c f} \right)^{1/2} \propto f^{-1/2}, \qquad (23) $$

as observed.

The high- and low-frequency spectra meet at

$$ f_1 = \frac{g}{w_1\rho c} \left( \frac{\sigma}{2gw_1} \right)^{1/3} \left( \frac{c'\rho'\sigma'}{c\rho\sigma} \right)^{1/3} 4^{1/3} \left( \frac{c\rho w_1}{c'\rho'w_2} \right)^{4/3} \approx \frac{1}{6\,\text{hr}}, \qquad (24) $$

which also agrees to within an order of magnitude with the crossover frequency of the ACRIM data, $f = 1/(1$ day).

We have applied the same model to the natural variability of climate (Pelletier 1995).
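As a rough numerical cross-check on equation (20), its unreduced form can be evaluated with the constants quoted in the text. The deep layer's specific heat is not stated explicitly and is inferred here from $\sigma' = \alpha'\rho'c'$; this sketch is purely illustrative:

```python
import math

# Constants quoted in the text; c2 (the deep layer's specific heat) is not
# stated explicitly and is inferred here from sigma' = alpha' * rho' * c'.
g = 2e4                 # W m^-2 K^-1, radiative conductance
w1, w2 = 2e6, 1e7       # m, widths of the granulation layer and deep layer
rho1, c1 = 0.003, 10.0  # granulation-layer density and specific heat
sigma1 = 3e7            # W m^-1 K^-1, granulation-layer conductivity
rho2, sigma2, alpha2 = 0.5, 3e8, 1e8  # deep convection zone values
c2 = sigma2 / (alpha2 * rho2)         # inferred: 6 J kg^-1 K^-1

# Unreduced form of Eq. (20).
f0 = g / (2.0 * math.pi * (w1 * c1 * rho1 + w2 * c2 * rho2)
          * (1.0 + g * w1 / sigma1))
period_months = 1.0 / f0 / (30.0 * 24.0 * 3600.0)
print(period_months)  # a few months, comparable to the quoted 1/(8 months)
```

The result is a crossover period of a few months, consistent with the order-of-magnitude agreement claimed in the text.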
The above model gives the fluctuations in the average temperature in a thin, homogeneous layer (Earth's atmosphere), coupled with another homogeneous layer with different thermal and diffusion constants (the ocean). As part of that work, we analyzed the Vostok ice core. We found the same spectral form reported here in the ACRIM data. The thermal and radiative timescales in that data set were 2000 and 40,000 yr, respectively. Estimates of those timescales based upon thermal constants and diffusion constants inferred from tracer studies in the atmosphere and ocean matched well the timescales of the crossover frequencies of the Vostok data. \ No newline at end of file diff --git a/samples/texts/6591221/page_5.md b/samples/texts/6591221/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..785582bbbf0c6a1d34692d7701153fa237eb8e71 --- /dev/null +++ b/samples/texts/6591221/page_5.md @@ -0,0 +1,23 @@ +4. CONCLUSIONS + +We have presented evidence that the power spectrum of variations in solar irradiance exhibits three scaling regions. We presented a model, originally from van Vliet et al. (1980), proposed to study temperature fluctuations in a metallic film (granulation layer) supported by a substrate (deep convection zone) that matches the observed frequency dependence of the power spectrum of irradiance fluctuations. + +I wish to thank Donald Turcotte and Ed Salpeter for helpful conversations of related work. I am indebted to Sandy Kwan of JPL who provided me with the ACRIM data. + +REFERENCES + +Bohm, K. H. 1963, ApJ, 137, 881 +Frohlich, C. 1987, J. Geophys. Res., 92, 796 +———. 1993, Adv. Space Res., 13, 9, 429 +Garratt, J. R. 1992, The Atmospheric Boundary Layer (Cambridge: Cambridge Univ. Press) +Kuhn, J. R., Libbrecht, K. G., & Dicke, R. H. 1988, Science, 242, 908 +Pelletier, J. D. 1995, J. Climate, submitted +Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 
1992, Numerical Recipes in C: The Art of Scientific Computing (2d ed.; Cambridge: Cambridge Univ. Press)

Stix, M. 1989, The Sun: An Introduction (Berlin: Springer)
van Kampen, N. G. 1981, Stochastic Processes in Physics and Chemistry (Amsterdam: North-Holland)
van Vliet, K. M., van der Ziel, A., & Schmidt, R. R. 1980, J. Appl. Phys., 51, 2947
Voss, R. F., & Clarke, J. 1976, Phys. Rev. B, 13, 556 \ No newline at end of file diff --git a/samples/texts/7104097/page_10.md b/samples/texts/7104097/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..6b016d0a326cae10af5cc063217bb14f450ac6d4 --- /dev/null +++ b/samples/texts/7104097/page_10.md @@ -0,0 +1,11 @@ +**Preprocessing.** In the deterministic algorithms we first compressed the input array by converting each word into a single bit, and then constructed $L_0$ and $R_0$ arrays from the compressed array. In the current algorithm we build the $L_0$ and $R_0$ arrays directly from the uncompressed input. For $p \in [0, \frac{1}{2} \log(N/N_1)]$, the $L_p$ and $R_p$ arrays are stored as CM sketches, while for $p \in [\frac{1}{2} \log(N/N_1) + 1, \log N]$ the arrays are stored directly, as in the deterministic case. Each $L_p[i]$ is added as $(L_p[i] + 1) \bmod (2^p + 1)$ to the CM sketch (similarly for $R_p[i]$). Thus a nonzero entry (of value at most $2^p$) is added to the CM sketch provided the corresponding block contains a 1; otherwise nothing is added. As a result, for any given $L_p$ the summation of all entries added to the CM sketch is at most $N_1 \times 2^p$, and we set $\varepsilon = \frac{1}{2 \times N_1 \times 2^p}$ for that sketch.

**Query Execution.** Given a query $R1Q_A(i, j)$, we use the MSB of $j-i+1$ to find the largest value of $p$ with $2^p \le j-i+1$, and then follow the approach for answering case (b) of inter-word queries described in Section 2.1.
If $2^p > \sqrt{N/N_1}$, we use the $L_p$ and $R_p$ arrays to answer the query correctly; otherwise we use the $L_p$ and $R_p$ values obtained from the corresponding CM sketches.

**Error Bound.** If the query range is larger than $\sqrt{N/N_1}$, the answer is always correct. For smaller queries we use CM sketches. Recall that for $p \in [0, \frac{1}{2}\log(N/N_1)]$, we store each $L_p$ (and $R_p$) as a CM sketch with parameter $\varepsilon = \frac{1}{2 \times N_1 \times 2^p}$. Hence, the estimated value $\hat{L}_p[i]$ of an entry $L_p[i]$ returned by the CM sketch is between $L_p[i]$ and $L_p[i] + \varepsilon||L_p|| \le L_p[i] + 0.5$ with probability at least $1-\delta$. In other words, with probability at least $1-\delta$, the CM sketch returns the correct value. In order to answer an R1Q we need to access at most four CM sketches. Hence, with probability at least $(1-\delta)^4 \ge 1-4\delta$, the query will return the correct answer.

**Theorem 3.** Given a 1-D bit array of length $N$ containing $N_1$ nonzero entries, and a parameter $\delta \in (0, \frac{1}{4})$, one can construct a data structure occupying $\mathcal{O}(\sqrt{NN_1} \log N \log(\frac{1}{\delta}))$ bits (and discard the input array) to answer each R1Q correctly in $\mathcal{O}(\ln \frac{1}{\delta})$ worst-case time with probability at least $1-4\delta$. For query ranges larger than $\sqrt{N/N_1}$ the query result is always correct.

By tweaking the algorithm described above slightly, we can reduce the space complexity even further at the cost of providing a weaker correctness guarantee. We assume that we are given an additional parameter $\gamma \in (0, \frac{1}{4})$. The required modification is described below.

For each $p \in [0, \log N]$, we store the $L_p$ and $R_p$ arrays as CM sketches. However, instead of adding a value $v$ directly to a CM sketch, we now add a $(1+\gamma)$ approximation of $v$. More precisely, we add $[\log_{1+\gamma}(1+v)]$ instead of $v$.
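For readers unfamiliar with CM sketches, a generic Count-Min sketch in the style of Cormode and Muthukrishnan looks roughly as follows. The hashing and parameter choices here are illustrative and are not the exact construction used in this algorithm:

```python
import math
import random

class CountMinSketch:
    """Generic Count-Min sketch (illustrative; not the paper's exact construction)."""

    def __init__(self, eps, delta, seed=0):
        self.width = math.ceil(math.e / eps)           # controls the additive error
        self.depth = math.ceil(math.log(1.0 / delta))  # controls the failure probability
        rng = random.Random(seed)
        self.salts = [rng.getrandbits(64) for _ in range(self.depth)]
        self.table = [[0] * self.width for _ in range(self.depth)]
        self.total = 0

    def add(self, key, value=1):
        self.total += value
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, key)) % self.width] += value

    def query(self, key):
        # Never underestimates; overestimates by at most eps * total mass
        # with probability at least 1 - delta.
        return min(self.table[row][hash((salt, key)) % self.width]
                   for row, salt in enumerate(self.salts))

cms = CountMinSketch(eps=0.01, delta=0.01)
for i in range(1000):
    cms.add(i % 50)    # 50 distinct keys, 20 occurrences each
print(cms.query(7))    # at least 20, and with high probability close to 20
```

The $(1+\gamma)$ variant described above additionally log-compresses each value before insertion, trading a multiplicative approximation for smaller counter magnitudes.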
Hence, for a given $L_p$, the summation of all entries added to its CM sketch is at most $N_1[\log_{1+\gamma}(1+2^p)]$, and so we set the parameter $\varepsilon$ to $1/(2N_1[\log_{1+\gamma}(1+2^p)])$ for that sketch. The total space used by all CM sketches can be shown to be $\mathcal{O}(N_1 \log^3 N \log_{1+\gamma}(1/\delta))$. We store a lookup table of size $\mathcal{O}(\log^2 N)$ for conversions from $[\log_{1+\gamma}(1+v)]$ to $v$, and an MSB table of size $\mathcal{O}(N^{1/c} \log \log N)$ for some given integer constant $c>1$. \ No newline at end of file diff --git a/samples/texts/7675318/page_2.md b/samples/texts/7675318/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..58ed856552b009bb88dddf71bdeb867508cb4c1b --- /dev/null +++ b/samples/texts/7675318/page_2.md @@ -0,0 +1,48 @@ +The computation of the Brown-Peterson cohomology above is done by Kono and Yagita in [6], and the result above is conjectured in [6].

**Theorem 1.2.** For $(G, p) = (G_2, 2), (E_6, 2), (F_4, 3), (E_7, 3), (E_8, 5)$ and $(PU(pm), p)$ where $p \nmid m$, the $E_2$-term of the Adams spectral sequence for $P(n)^*(BG)$ has no odd degree elements and it collapses at the $E_2$-level. In particular, we have

$$K(n)^*(BG) = K(n)^* \otimes_{BP^*} BP^*(BG)$$

and the Morava K-theory $K(n)^*(BG)$ has no odd degree elements.

Before we begin the computation, we would like to mention the interpretation of $K$-theory in terms of representation theory. Recall the following theorem.

**Theorem 1.3** (Atiyah-Segal). *There is an isomorphism*

$$R(G)^{\wedge} \to K(BG),$$

where $R(G)^{\wedge}$ is the completion of the complex representation ring of $G$ with respect to the augmentation ideal.

Also, Morava $K(0)$-theory is ordinary cohomology with coefficients in the rational numbers $\mathbb{Q}$ or its $p$-completion $\mathbb{Q}_p$. Thus, through de Rham theory, it is related to differential forms.
Morava $K(1)$-theory is related to the $p$-localization of complex $K$-theory and to vector bundles. Morava $K(2)$-theory is related to elliptic cohomology. Many mathematicians dream of a geometric and/or representation-theoretic interpretation of Morava $K$-theories and related cohomology theories, e.g. elliptic cohomology and complex cobordism theory. We hope that the computation of Morava $K$-theories of classifying spaces might shed some light on such geometric and/or representation-theoretic interpretations.

## 2. ORDINARY COHOMOLOGY THEORY

As we already mentioned, when $G$ has no $p$-torsion, we have a satisfactory result:

**Theorem 2.1.** If $G$ has no $p$-torsion, then the induced homomorphism

$$H^*(BG; \mathbb{Z}/p) \to H^*(BT; \mathbb{Z}/p)$$

is a monomorphism. If $p$ is an odd prime, then the image of the above homomorphism is the ring of invariants of the Weyl group $W$,

$$H^*(BG; \mathbb{Z}/p) = H^*(BT; \mathbb{Z}/p)^W = \mathbb{Z}/p[y_1, \dots, y_n],$$

where $\deg y_1 \cdots \deg y_n = 2^n|W|$ and $\deg y_1, \dots, \deg y_n$ are even. In particular, the mod $p$ cohomology of $BG$ is a polynomial algebra over $\mathbb{Z}/p$.

There are compact Lie groups $G$ with $p$-torsion. The following is the list of the $p$-torsion of the simply connected, simple compact Lie groups. \ No newline at end of file diff --git a/samples/texts/7675318/page_3.md b/samples/texts/7675318/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..8dd73b8dd7e7139c0d48cf9b8ec23eb71a4b51c7 --- /dev/null +++ b/samples/texts/7675318/page_3.md @@ -0,0 +1,31 @@
| Lie group | 2-torsion | 3-torsion | 5-torsion | $p$-torsion ($p > 5$) |
| --- | --- | --- | --- | --- |
| $SU(n)$ | x | x | x | x |
| $Sp(n)$ | x | x | x | x |
| $Spin(n)$ | | x | x | x |
| $G_2$ | | x | x | x |
| $F_4$ | | | x | x |
| $E_6$ | | | x | x |
| $E_7$ | | | x | x |
| $E_8$ | | | | x |

(An x indicates that the group has no $p$-torsion at that prime; a blank entry indicates that it does.)
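As a standard illustration of Theorem 2.1 in the torsion-free case (this worked example is ours, not part of the original table): for $G = SU(n)$, with maximal torus $T$ of rank $n-1$ and Weyl group $W = S_n$,

$$H^*(BSU(n); \mathbb{Z}/p) \cong H^*(BT; \mathbb{Z}/p)^{S_n} = \mathbb{Z}/p[c_2, \dots, c_n], \qquad \deg c_i = 2i,$$

and the product of the degrees of the generators is $4 \cdot 6 \cdots 2n = 2^{n-1}\, n!$, consistent with the degree formula in Theorem 2.1 (here the rank is $n-1$ and $|W| = n!$).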
Most of the mod $p$ cohomology of the classifying spaces of the above Lie groups has been computed as graded $\mathbb{Z}/p$-modules. Only the cases $p = 2$, $G = E_8$ and $p = 3$, $G = E_8$ remain unsolved, although some of the details are not yet in the literature.

There are important examples of connected compact Lie groups with $p$-torsion. Among those are the projective classical groups, such as the projective unitary group $PU(n)$, which is the quotient of the unitary group $U(n)$ by its center $S^1$. The ordinary cohomology of the projective classical groups seems to be difficult to compute. Among the nontrivial cases, only the cases $p = 2$, $G = PO(4n + 2)$, $PU(4n + 2)$, $PSp(4n + 2)$ and $p = 3$, $G = PU(3)$ were computed in the 20th century (see [3], [4], [5]). The case $p > 3$, $G = PU(p)$ was computed recently in [2], [11]. The computation of $H^*(PU(p^n); \mathbb{Z}/p)$ ($n \ge 2$) seems to be difficult.

When $G$ has $p$-torsion, there are odd degree elements in the mod $p$ cohomology of $BG$, so that the induced homomorphism

$$H^*(BG; \mathbb{Z}/p) \to H^*(BT; \mathbb{Z}/p)$$

is no longer a monomorphism. Replacing the maximal torus by elementary abelian $p$-subgroups $A$, Quillen proved the following theorem.

**Theorem 2.2.** *There is an $F$-isomorphism*

$$H^*(BG;\mathbb{Z}/p) \to \lim_{\leftarrow} H^*(BA;\mathbb{Z}/p).$$

That this map is an $F$-isomorphism means that for each $x \in \lim H^*(BA;\mathbb{Z}/p)$ some power of $x$ lies in the image of the homomorphism, and that each element in its kernel is nilpotent.

The cohomology of elementary abelian $p$-subgroups is not only useful in the computation of the mod $p$ cohomology of classifying spaces but also important to our understanding of it. So, we hope that the following conjecture is true.
**Conjecture 2.3.** *For $p > 2$, the induced homomorphism*

$$H^*(BG;\mathbb{Z}/p) \to \prod_A H^*(BA;\mathbb{Z}/p)$$

*is a monomorphism, where $A$ ranges over the conjugacy classes of elementary abelian $p$-subgroups of $G$.*

In the case $p = 2$, the above conjecture does not hold. See Kono-Yagita [6].

## 3. MORAVA K-THEORY AND BROWN-PETERSON COHOMOLOGY THEORY

The computation of generalized cohomology theories is easy when $G$ has no $p$-torsion and the coefficient ring of the generalized cohomology theory has no odd
\ No newline at end of file diff --git a/samples/texts/7675318/page_4.md b/samples/texts/7675318/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..d4856d6e1ae93268ad560e774d6f10ffee354c60 --- /dev/null +++ b/samples/texts/7675318/page_4.md @@ -0,0 +1,47 @@
degree elements. Since the ordinary cohomology of $BG$ has no odd degree elements, the $E_2$-term

$$E_2^{p,q} = H^p(BG; E^q)$$

of the Atiyah-Hirzebruch spectral sequence converging to the generalized cohomology theory $E^*(BG)$ collapses at the $E_2$-level, and we have the following proposition.

**Proposition 3.1.** *The Brown-Peterson cohomology $BP^*(BG)$ is isomorphic to*

$$BP^* \otimes_{\mathbb{Z}_{(p)}} H^*(BG; \mathbb{Z}_{(p)}).$$

*The Morava K-theory $K(n)^*(BG)$ is isomorphic to*

$$K(n)^* \otimes_{\mathbb{Z}/p} H^*(BG; \mathbb{Z}/p).$$

If $G$ has $p$-torsion, then the mod $p$ cohomology has an odd degree nonzero element. Our results seem to support the following conjecture.

**Conjecture 3.2.** *The Brown-Peterson cohomology of the classifying space of a connected compact Lie group has no odd degree elements.*

Many mathematicians believed that this conjecture should be true for all compact Lie groups, including all finite groups. But a counterexample was constructed by Kriz in [7], [8]. We do not know any counterexample in the case of connected compact Lie groups.

There is a generalized cohomology theory $P(n)$ for $n \ge 0$.
$P(0)$ is $BP$ or its $p$-completion. We compute the $P(n)$-cohomology in order to compute the Morava $K$-theories.

**Theorem 3.3.** *If the induced homomorphism*

$$\rho: BP^*(X) \to P(n)^*(X)$$

*is an epimorphism for all $n \ge 0$, then the following hold for all $n \ge 0$:*

$$P(n)^*(X) \cong P(n)^* \otimes_{BP^*} BP^*(X)$$

*and*

$$K(n)^*(X) \cong K(n)^* \otimes_{BP^*} BP^*(X).$$

So, if we would like to show that

$$K(n)^*(X) \cong K(n)^* \otimes_{BP^*} BP^*(X)$$

for all $n$, it suffices to show that

$$BP^*(X) \to P(n)^*(X)$$

is an epimorphism for all $n$.

In the case $p=2$, we have the following results for simply connected simple Lie groups.
(p = 2)

| $G$ | $BP$ | Morava $K$ |
| --- | --- | --- |
| $G_2$ | | |
| $F_4$ | ? | ? |
| $E_6$ | | |
| $E_7$ | ?? | ?? |
| $E_8$ | ?? | ?? |
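To illustrate the base change in Theorem 3.3 in the simplest torsion-free case (a standard example, not taken from the table above): for $G = SU(2)$ one has $BP^*(BSU(2)) \cong BP^*[[c_2]]$ with $\deg c_2 = 4$, and hence

$$K(n)^*(BSU(2)) \cong K(n)^* \otimes_{BP^*} BP^*(BSU(2)) \cong K(n)^*[[c_2]]$$

(with the completed tensor product), which is concentrated in even degrees, as expected for a group without torsion.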
\ No newline at end of file diff --git a/samples/texts/7675318/page_7.md b/samples/texts/7675318/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..9ef1be385197a5d8c519bdb5bc3a08c977953bf7 --- /dev/null +++ b/samples/texts/7675318/page_7.md @@ -0,0 +1,27 @@
where the index indicates the degree. There is an elementary abelian 2-subgroup $A$ of rank 5 in $F_4 \subset E_6$. The induced homomorphism

$$H^*(BE_6; \mathbb{Z}/2) \to H^*(BA; \mathbb{Z}/2)$$

is not a monomorphism. The image of this induced homomorphism is isomorphic to

$$H^*(BG_2; \mathbb{Z}/2) \otimes \mathbb{Z}/2[y_{16}^2, y_{24}^2]$$

and the action of the Milnor operations on $y_{16}^2, y_{24}^2$ is trivial. The kernel of this induced homomorphism is the ideal generated by $y_{10}, y_{18}, y_{34}$, and it has no odd degree elements. So, by investigating the long exact sequence of Ext groups induced by the short exact sequence

$$0 \to (y_{10}, y_{18}, y_{34}) \to H^*(BE_6; \mathbb{Z}/2) \to H^*(BG_2; \mathbb{Z}/2) \otimes \mathbb{Z}/2[y_{16}^2, y_{24}^2] \to 0$$

of $\mathcal{E}_n$-modules, we obtain the collapsing of the Adams spectral sequence.

## 5. BEYOND CONNECTED COMPACT LIE GROUPS

There seem to be several directions in which to extend the homotopy theory of classifying spaces of connected compact Lie groups. One of them is the study of the cohomology of finite Chevalley groups and of free loop spaces of classifying spaces. I would like to end this talk with this.

For a connected compact Lie group $G$, there is a complexification $G(\mathbb{C})$. $G(\mathbb{C})$ is a connected reductive complex algebraic group, and there is a reductive integral group scheme $G_Z$ such that $G_Z(\mathbb{C}) = G(\mathbb{C})$ as an algebraic group. Replacing $\mathbb{C}$ by $\mathbb{F}_q$, we obtain the finite Chevalley group $G(\mathbb{F}_q)$.
By Friedlander-Quillen theory, if $q$ is a power of $p$ and if $\ell$ is a prime number not equal to $p$, then the mod $\ell$ cohomology of the finite Chevalley group is isomorphic to the mod $\ell$ cohomology of the following pull-back $F_{\phi_q}$:

where $\phi^q$ is the Frobenius map and $BG^\wedge$ is the Bousfield-Kan $\mathbb{Z}/\ell$-completion. If $q-1 \equiv 0 \pmod{\ell}$, the induced homomorphism

$$1 - \phi^{q*}: H^*(BG^\wedge; \mathbb{Z}/\ell) \to H^*(BG^\wedge; \mathbb{Z}/\ell)$$

is zero, and the Eilenberg-Moore spectral sequence for $H^*(F_{\phi_q}; \mathbb{Z}/\ell)$ has the same $E_2$-term as the Eilenberg-Moore spectral sequence for $H^*(LBG; \mathbb{Z}/\ell)$, where $LBG$ is the free loop space of $BG$; it is the pull-back of the following diagram.

If $G$ has no $\ell$-torsion, both Eilenberg-Moore spectral sequences collapse at the $E_2$-level, and we obtain the same mod $\ell$ cohomology for the finite Chevalley groups $BG(\mathbb{F}_q)$ and the free loop space $LBG$.
\ No newline at end of file diff --git a/samples/texts/7675318/page_8.md b/samples/texts/7675318/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..0a00bcad7c9b0ef9c102af99cbf89be9e27692a6 --- /dev/null +++ b/samples/texts/7675318/page_8.md @@ -0,0 +1,38 @@
On the other hand, the induced homomorphism

$$1 - \phi^{q*} : K(n)^*(BG^{\wedge}) \to K(n)^*(BG^{\wedge})$$

is not zero, where $K(n)^* = \mathbb{Z}/\ell[v_n, v_n^{-1}]$. So the $E_2$-term of the Eilenberg-Moore spectral sequence for $K(n)^*(BG(\mathbb{F}_q))$ differs from the one for $K(n)^*(LBG)$. Again, when $G$ has no $\ell$-torsion, we have a satisfactory answer for the Morava $K$-theories of finite Chevalley groups.

**Theorem 5.1** (Tanabe [10]). *If $G$ has no $\ell$-torsion, then $K(n)^*(BG(\mathbb{F}_q))$ has no odd degree elements.*
When $G$ has $\ell$-torsion, we know little about the mod $\ell$ cohomology, the Brown-Peterson cohomology and the Morava $K$-theories of the classifying spaces of the finite Chevalley groups $G(\mathbb{F}_q)$ and of the free loop space $LBG$.

## REFERENCES

[1] K. Inoue and N. Yagita, The complex cobordism of $BSO_n$, preprint.

[2] M. Kameko and N. Yagita, The Brown-Peterson cohomology of the classifying spaces of the projective unitary groups $PU(p)$ and exceptional Lie groups, to appear in Trans. Amer. Math. Soc.

[3] A. Kono and M. Mimura, On the cohomology of the classifying spaces of PSU$(4n + 2)$ and PO$(4n + 2)$, Publ. Res. Inst. Math. Sci. **10** (1974/75), no. 3, 691–720.

[4] A. Kono and M. Mimura, Cohomology mod 2 of the classifying space of PSp$(4n + 2)$, Publ. Res. Inst. Math. Sci. **11** (1975/76), no. 2, 535–550.

[5] A. Kono, M. Mimura and N. Shimada, Cohomology of classifying spaces of certain associative $H$-spaces, J. Math. Kyoto Univ. **15** (1975), no. 3, 607–617.

[6] A. Kono and N. Yagita, Brown-Peterson and ordinary cohomology theories of classifying spaces for compact Lie groups, Trans. Amer. Math. Soc. **339** (1993), no. 2, 781–798.

[7] I. Kriz, Morava $K$-theory of classifying spaces: some calculations, Topology **36** (1997), no. 6, 1247–1273.

[8] I. Kriz and K. P. Lee, Odd-degree elements in the Morava $K(n)$ cohomology of finite groups, Topology Appl. **103** (2000), no. 3, 229–241.

[9] M. Mimura and H. Toda, *Topology of Lie groups*. I, II, Translated from the 1978 Japanese edition by the authors, Amer. Math. Soc., Providence, RI, 1991.

[10] M. Tanabe, On Morava $K$-theories of Chevalley groups, Amer. J. Math. **117** (1995), no. 1, 263–278.

[11] A. Vistoli, On the cohomology and the Chow ring of the classifying space of $PGL_p$, preprint, math.AG/0505052.

[12] W. S. Wilson, The complex cobordism of $BO_n$, J. London Math. Soc. (2) **29** (1984), no. 2, 352–366.
TOYAMA UNIVERSITY OF INTERNATIONAL STUDIES, TOYAMA 930-1292, JAPAN
E-mail address: kameko@tuins.ac.jp
\ No newline at end of file diff --git a/samples/texts/7675318/page_9.md b/samples/texts/7675318/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..293482a187f5fc1b002784fcd167e4a98589dad7 --- /dev/null +++ b/samples/texts/7675318/page_9.md @@ -0,0 +1 @@
Twisted K-theory of compact Lie groups has been studied by physicists (see e.g. [Br04] [MMS01] [GG04]) as well as by mathematicians, starting with Douglas [Do06]. The computation of the twisted K-groups was extended to Lie groups that are not necessarily compact, simple and simply connected in [GG04] [MR17]. The results for the twisted K-theory $K^*(G, h)$, for arbitrary choices of the twist $h$, are already rather complicated and hard to understand. The twisted Morava K-homology of all groups in the Whitehead tower of the orthogonal and unitary groups, and of their classifying spaces, with the canonical twist, is isomorphic to the underlying untwisted Morava K-homology. This seems to be an instance of a more general phenomenon, and it occurs even in the case of K-homology. In particular, we discuss the classifying spaces $BG$ that are $p$-compact for all primes when the groups are certain subgroups of simple Lie groups. A survey of the $p$-compactness of $BG$ for a single prime is included. 55R35; 55P15, 55P60. A $p$-compact group (see Dwyer-Wilkerson [8]) is a loop space $X$ such that $X$ is $\mathbb{F}_p$-finite and its classifying space $BX$ is $\mathbb{F}_p$-complete (see Andersen-Grodal-Møller-Viruel [2] and Dwyer-Wilkerson [11]). We recall that the $p$-completion of a compact Lie group $G$ is a $p$-compact group if $\pi_0(G)$ is a $p$-group. Next, if $C(I)$ denotes the centralizer of a group homomorphism MORAVA K-THEORY OF EILENBERG-MAC LANE SPACES ERIC PETERSON This talk is about a 1980s computation by Ravenel and Wilson of the Morava K-theories of certain Eilenberg-Mac Lane spaces.
The most basic situation is $q = 1$, so that we're studying the Eilenberg-Mac Lane space $K(\mathbb{Z}/p^j, 1) = B\mathbb{Z}/p^j$. Keep in mind that really the only things we know about Morava K-theory are: (1) its coefficient ring is $K^* = \mathbb{F}_{p^n}[v_n^{\pm}]$, which happens to be a graded field; (2) we also understand its "$[p]$-series," which is the action of the map $[p]: \mathbb{CP}^{\infty} \to \mathbb{CP}^{\infty}$ on cohomology. Facts about these objects are best organized by a tool called "Dieudonné theory," which is an analogue of the theory of Lie algebras for $p$-divisible groups, but I can't go into that now.
\ No newline at end of file diff --git a/samples/texts/7856234/page_1.md b/samples/texts/7856234/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..78de13a6c9a2c2858e765eb3d6d9e83fd10a6763 --- /dev/null +++ b/samples/texts/7856234/page_1.md @@ -0,0 +1,18 @@
INCEPTION POINT AND AIR-WATER FLOW CHARACTERISTICS
OVER STEPPED SPILLWAY: NUMERICAL STUDY

LE POINT D'INCEPTION ET LES CARACTERISTIQUES DE
L'ECOULEMENT EAU-AIR SUR LES DEVERSOIRS EN MARCHE
ESCALIER : ETUDE NUMERIQUE

BENTALHA C., HABI M.

Department of Hydraulic Engineering, Abou Bekr Belkaid University, Tlemcen, Algeria

c_bentalha@yahoo.fr

ABSTRACT

A stepped spillway is a hydraulic structure designed to dissipate considerable kinetic energy, because it is characterised by a large surface roughness. On stepped chutes with a skimming flow regime, the flow is highly aerated, which can reduce the risk of cavitation. The air entrainment starts where the boundary layer attains the free surface of the flow; this point is called the "point of inception". In the present numerical study, the Reynolds-Averaged Navier-Stokes equations are coupled with the standard k-ε turbulence model to simulate the water flow over a stepped spillway by using the software Ansys Fluent. The volume of fluid (VOF) model is used as a tool to track the free surface of the flow.
This research aims to find new formulas that describe the variation of water depth and the position of the inception point, and to study the characteristics of flow over stepped spillways. The numerical results agree well with experimental results.

**Keywords:** Ansys Fluent, VOF Model, Air entrainment, Stepped Spillway, Standard k-ε Model.
\ No newline at end of file diff --git a/samples/texts/7856234/page_10.md b/samples/texts/7856234/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..e37c4b630db373a819375746fbb9a325eaadc2f9 --- /dev/null +++ b/samples/texts/7856234/page_10.md @@ -0,0 +1,11 @@
$$ \frac{\delta}{L} = 0.15 \left( \frac{L}{k_s} \right)^{-0.37} \qquad (1) $$

where $\delta$ is the boundary layer thickness, $L$ is the streamwise distance from the start of the growth of the boundary layer, and $k_s$ is the roughness height.

Much experimental research has been devoted to characterising the flow over stepped spillways. Today, with the use of high-performance computers and more efficient computational fluid dynamics (CFD) codes, the flow over spillways can be investigated numerically (Table 1) in reasonable time and with reasonable expense.

In this study, the water surface profile was compared with the critical depth to determine the regime of the gradually varied flow. This paper aims to develop new relationships for determining the variation of the water level upstream of the inception point and the distance from the spillway crest to this point; at the same time, contour maps of the stream function, vorticity and shear strain rate are presented. The simulation results were compared with the experimental data of Zhang and Chanson (2015, 2016).

**Table 1: Detailed numerical investigations of air-water flow in stepped chutes.**
| Study | Software | Turbulence model | Chute slope | Simulation results |
| --- | --- | --- | --- | --- |
| Chen et al. (2002) | NA | k-ε | 53.13° | Free surface; distribution of velocity and pressure |
| Bombardelli et al. (2011) | 3D-Flow | RNG k-ε, LES | 53.13° | Water depth profile; boundary layer thickness, velocity and kinetic energy; air entrainment |
| Cheng et al. (2006) | Fluent | RNG k-ε | 53.13° | Distribution of velocity and pressure |
| Chinnarasri et al. (2012) | Fluent | realisable k-ε | 26.56° | Distribution of velocity and turbulence intensity |
| Van Alwon et al. (2017) | ANSYS Fluent v16.2 | realisable k-ε | 45° | Air entrainment and pressure |
| Mohammed et al. (2012) | 3D-Flow | RNG k-ε, LES | 50° | Air entrainment; velocity and pressure |
| Bentalha et al. (2015, 2017 and 2018) | Fluent V 6.3 | k-ε | 14°; 22°; 26° | Inception point location and air entrainment; velocity, static pressure and cavitation; inception point location and free surface |
| Present study | ANSYS Fluent V 15.0 | k-ε | 45° | Stream function, turbulent viscosity and shear strain |
\ No newline at end of file diff --git a/samples/texts/7856234/page_11.md b/samples/texts/7856234/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..927625406d5e354a800ebbfe12c6c611b3c81d38 --- /dev/null +++ b/samples/texts/7856234/page_11.md @@ -0,0 +1,51 @@
NUMERICAL MODEL

Ansys Fluent computational fluid dynamics (2014) is used to solve the Reynolds-Averaged Navier-Stokes equations, which are based on momentum and mass conservation of the multi-phase flow over the stepped spillway. The standard k-ε turbulence model is adopted to close the equations.

Continuity equation:

$$
\frac{\partial \rho}{\partial t} + \frac{\partial \rho u_i}{\partial x_i} = 0 \tag{2}
$$

Momentum equation:

$$
\frac{\partial \rho u_i}{\partial t} + \frac{\partial}{\partial x_j} (\rho u_i u_j) = -\frac{\partial p}{\partial x_i} + \rho g_i + \frac{\partial}{\partial x_j} \left\{ (\mu + \mu_t) \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \right\} \quad (3)
$$

Turbulence kinetic energy equation (k):

$$
\frac{\partial}{\partial t}(\rho k) + \frac{\partial}{\partial x_i}(\rho k u_i) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + G_k - \rho\epsilon \quad (4)
$$

Turbulence dissipation rate equation (ε):

$$
\frac{\partial}{\partial t}(\rho\epsilon) + \frac{\partial}{\partial x_i}(\rho\epsilon u_i) = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\epsilon}\right)\frac{\partial\epsilon}{\partial x_j}\right] + C_{\epsilon1}\frac{\epsilon}{k}G_k - C_{\epsilon2}\rho\frac{\epsilon^2}{k} \quad (5)
$$

where $G_k$ is the production of turbulent kinetic energy, which can be given as

$$
G_k = \mu_t \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \frac{\partial u_i}{\partial x_j} = 2 \mu_t S_{ij} \frac{\partial u_i}{\partial x_j} \quad (6)
$$

$\mu_t$ is the turbulent viscosity that satisfies

$$
\mu_t = \rho C_\mu \frac{k^2}{\varepsilon} \qquad (7)
$$

$C_\mu = 0.09$ is a constant determined experimentally;

$\sigma_k$ and $\sigma_\epsilon$ are the turbulence Prandtl numbers for the k and ε equations respectively, $\sigma_k = 1.0$, $\sigma_\epsilon = 1.3$;

$C_{\epsilon1}$ and $C_{\epsilon2}$ are ε-equation constants, $C_{\epsilon1} = 1.44$, $C_{\epsilon2} = 1.92$.

The volume of fluid (VOF) method is applied to simulate the free surface between water and air (Ansys Fluent 2014). In this approach, the tracking of the interface between air and water is accomplished by the solution of a continuity equation for the volume fraction of water:
\ No newline at end of file diff --git a/samples/texts/7856234/page_12.md b/samples/texts/7856234/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..3077873d762a26c7182b02eb348e5277a957f287 --- /dev/null +++ b/samples/texts/7856234/page_12.md @@ -0,0 +1,25 @@
$$ \frac{\partial \alpha_w}{\partial t} + \frac{\partial \alpha_w u_i}{\partial x_i} = 0; \quad 0 \le \alpha_w \le 1 \qquad (8) $$

where $\alpha_w$ is the volume fraction of water.

In each cell, the sum of the volume fractions of air and water is unity. So, the volume fraction of air, denoted $\alpha_a$, is given by

$$ \alpha_a = 1 - \alpha_w \qquad (9) $$

The geometry of the numerical model and the boundary conditions are shown in figure 2. The stepped spillway contains 12 identical steps, each 0.1 m high and 0.1 m long. The channel slope is $\theta = 45^\circ$.

The two-dimensional numerical domain was divided into an unstructured grid (triangular cells), which adapts well to the complex geometry and boundaries. Triangular cells of 1 cm² are used.

The boundary conditions in this study are velocity inlets at the water and air inlets and a pressure outlet at the outlet. All walls are stationary, no-slip walls. The viscous layer near the wall is treated with the standard wall function.
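Equation (7) above can be sanity-checked numerically; the short sketch below (with illustrative values chosen by us, not taken from the paper) evaluates the turbulent viscosity from $k$ and $\varepsilon$:

```python
# Turbulent viscosity of the standard k-epsilon model, eq. (7):
# mu_t = rho * C_mu * k^2 / epsilon, with the empirical constant C_mu = 0.09.
C_MU = 0.09

def turbulent_viscosity(rho, k, eps):
    """rho: density [kg/m^3], k: turbulence kinetic energy [m^2/s^2],
    eps: dissipation rate [m^2/s^3]; returns mu_t in [Pa s]."""
    return rho * C_MU * k**2 / eps

# Illustrative (made-up) values for water:
mu_t = turbulent_viscosity(rho=998.0, k=0.01, eps=0.001)
print(round(mu_t, 3))  # 8.982
```

Note how strongly $\mu_t$ depends on $k$: halving the turbulence kinetic energy reduces the turbulent viscosity by a factor of four.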
The boundary conditions for the turbulent quantities such as k and $\epsilon$ can be calculated from (Fluent 2014):

$$ k = \frac{3}{2} (U_{\text{avg}} I)^2 \qquad (10) $$

$$ \epsilon = C_\mu^{3/4} \frac{k^{3/2}}{0.07 D_H} \qquad (11) $$

where $I$ is the turbulence intensity, which can be estimated from the following formula derived from an empirical correlation for pipe flows:

$$ I = 0.16(\mathrm{Re}_{D_H})^{-1/8} \qquad (12) $$

$U_{\text{avg}}$ is the mean velocity at the water inlet and $D_H$ is the hydraulic diameter.

Figure 2: Boundary conditions and numerical model of a stepped spillway
\ No newline at end of file diff --git a/samples/texts/7856234/page_13.md b/samples/texts/7856234/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..f992111522d1f2ac7d16dcde117468a75a887ce9 --- /dev/null +++ b/samples/texts/7856234/page_13.md @@ -0,0 +1,8 @@
RESULTS AND DISCUSSION

The results of the numerical simulation are compared with those obtained from the physical model. In this study, the position of the inception point is computed and compared with the experimental data of Zhang and Chanson (2015) for 0.7 ≤ dc/h ≤ 1.7, where dc is the critical flow depth and h is the step height.

Figure 3: Free surface obtained by Ansys Fluent
\ No newline at end of file diff --git a/samples/texts/7856234/page_14.md b/samples/texts/7856234/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..a63ee0c28c33eea271206e608ba56a6e2ca7f960 --- /dev/null +++ b/samples/texts/7856234/page_14.md @@ -0,0 +1,9 @@
Figure 3 compares the start of air entrainment obtained by the numerical model and by experiment for different discharges. As can be seen from this figure, the calculated inception point agrees well with the measured one.
At the inception point, the degree of turbulence is large enough to entrain air into the black water flow (Cheng et al., 2006), and then the volume fraction of water becomes less than unity (Bentalha and Habi, 2015). Boes and Hager (2003) suggest that the mean air concentration at the inception point is approximately 0.22. This figure also indicates that the inception point moves toward the basin floor when the discharge increases. Table 2 summarises the characteristics of the inception point of free-surface aeration found by Zhang and Chanson (2015) and by using Ansys Fluent.

Table 2: Measured and computed inception point
($L_i$, $d_i$: Zhang and Chanson (2016); $L_{CFD}$, $d_{CFD}$: Ansys Fluent)

| $d_c/h$ | $L_i$ (m) | $d_i$ (m) | $L_{CFD}$ (m) | $d_{CFD}$ (m) | $(L_i - L_{CFD})/L_i \times 10^2$ | $(d_i - d_{CFD})/d_i \times 10^2$ |
| --- | --- | --- | --- | --- | --- | --- |
| 0.9 | 0.57 | 0.033 | 0.43 | 0.035 | 24.56 | 6.06 |
| 1 | 0.57 | 0.041 | 0.56 | 0.043 | 1.57 | 4.87 |
| 1.1 | 0.57 | – | 0.56 | 0.045 | 1.57 | – |
| 1.3 | 0.85 | 0.049 | 0.78 | 0.052 | 8.23 | 6.12 |
| 1.5 | 0.85 | 0.061 | 0.91 | 0.060 | 7.06 | 1.64 |
| 1.7 | 1.13 | 0.062 | 1.16 | 0.061 | 2.65 | 1.61 |
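The error columns of Table 2 follow directly from the tabulated locations and depths; the short check below (ours, not part of the paper) recomputes the first row:

```python
# Recompute the relative errors of Table 2 for dc/h = 0.9:
# |Li - L_CFD| / Li * 100 and |di - d_CFD| / di * 100.
def relative_error_percent(measured, computed):
    """Relative deviation of `computed` from `measured`, in percent."""
    return abs(measured - computed) / measured * 100.0

err_L = relative_error_percent(0.57, 0.43)    # inception point location
err_d = relative_error_percent(0.033, 0.035)  # water depth at inception
print(round(err_L, 2), round(err_d, 2))  # 24.56 6.06
```

Both values reproduce the corresponding entries of Table 2.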
**Figure 4: Free surface obtained by Ansys Fluent**

Table 2 shows good agreement between the observed and numerical results. As can be seen from this table, the calculated water depth at the inception point is very close to the experimental value. The differences between the
\ No newline at end of file diff --git a/samples/texts/7856234/page_15.md b/samples/texts/7856234/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..923a80318d6a164cca7ef3b7709c4b99fa0f0d11 --- /dev/null +++ b/samples/texts/7856234/page_15.md @@ -0,0 +1,5 @@
numerical and experimental inception point locations ($L_i$ and $L_{CFD}$) are small (see also Fig. 3), except at the lowest discharge, where the difference is slightly higher (24.56 %). This difference may be due to the VOF model, which underestimates the value of the air concentration (Afshin and Mitra 2012).

Figure 4 shows the simulated water surface profile along the stepped spillway with slope $\theta = 45^\circ$ for $0.7 \le d_c/h \le 1.7$, where $d_c$ is the critical flow depth and $h$ is the step height. It is clear that the water depth increases with increasing flow rate. The water depth on the steep stepped spillway is always less than the critical depth, which means that the flow regime is supercritical (see figure 5).

Figure 5: Comparison between critical depth and water depth
\ No newline at end of file diff --git a/samples/texts/7856234/page_2.md b/samples/texts/7856234/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..b3f5e2cbf08519ea17771f187f3b84012ffc3dfa --- /dev/null +++ b/samples/texts/7856234/page_2.md @@ -0,0 +1,15 @@
The dimensionless water depth at the step edges upstream of the inception point, obtained by simulation and measured by Zhang and Chanson (2016), is depicted in figure 6. A very satisfactory agreement can be observed between both results. This result is qualitatively similar to those presented by Bombardelli et al. (2011).
The profile of normalized water surface elevation was best fitted by the following equation (see figure 6):

$$ \frac{d}{L} = \left(\frac{d_c}{h}\right)^{1.2} \left(\frac{L}{k_s}\right)^{-1.28} \quad (13) $$

where $d$ is the water depth, $L$ is the streamwise distance from the spillway crest, $h$ is the step height, $k_s = h \cos(\theta)$, and $d_c$ is the critical flow depth.

The start of air entrainment is defined by the point where the boundary layer thickness reaches the water depth ($\delta \approx d$), so the distance from the start of the growth of the boundary layer to the inception point of air, $L_{incp}$, can be obtained by equating equation (13) and equation (1):

$$ \frac{\delta}{L_{incp}} \approx \frac{d}{L_{incp}} \Rightarrow 0.15 \left( \frac{L_{incp}}{k_s} \right)^{-0.37} \approx \left( \frac{d_c}{h} \right)^{1.2} \left( \frac{L_{incp}}{k_s} \right)^{-1.28} $$

which can be rearranged to give

$$ \frac{L_{incp}}{k_s} \approx 8.0426 \left( \frac{d_c}{h} \right)^{1.32} \quad (14) $$

Figure 6: Comparison between equation (13) and the normalised water depth obtained by experiment (exp) and by Ansys Fluent (CFD)
\ No newline at end of file diff --git a/samples/texts/7856234/page_3.md b/samples/texts/7856234/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..2cef78c7b61a8c6381a21b618863d41b4c293ea3 --- /dev/null +++ b/samples/texts/7856234/page_3.md @@ -0,0 +1,7 @@
From figure 7, the agreement between the locations of the inception point found by Zhang and Chanson (2016), by using Ansys Fluent, and by equation (14) is very good; equation (14) is applicable for a chute slope of 45°.

Figure 7: Comparison between dimensionless locations of the inception point and the correlation

Figure 8: Stream function along the stepped spillway for $q=0.22\ m^2\ s^{-1}$

Figure 8 presents the isolines of the stream function along the stepped spillway for $q=0.22\ m^2\ s^{-1}$.
This figure shows the development of recirculating vortices in
\ No newline at end of file diff --git a/samples/texts/7856234/page_4.md b/samples/texts/7856234/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..59dce1ad39cc4f7b966b4f00fa93de5926a1da7d --- /dev/null +++ b/samples/texts/7856234/page_4.md @@ -0,0 +1,7 @@
the step corners. Most of the energy is dissipated by momentum transfer between the skimming flow and the eddy in the interior of the step.

Figure 9 displays the contour map of turbulent viscosity in the stepped spillway for $q=0.22 \text{ m}^2\text{s}^{-1}$. An increase of the turbulent viscosity along the stepped spillway can be observed, which is the result of the development of recirculating vortices in the step corners.

**Figure 9: Contour map of turbulent viscosity for $q=0.22 \text{ m}^2\text{s}^{-1}$.**

**Figure 10: Vorticity along the stepped spillway for $q=0.22 \text{ m}^2\text{s}^{-1}$**
\ No newline at end of file diff --git a/samples/texts/7856234/page_5.md b/samples/texts/7856234/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..b314c12f67028dbb644b33175a483e6ab83ca966 --- /dev/null +++ b/samples/texts/7856234/page_5.md @@ -0,0 +1,7 @@
**Figure 11:** Shear strain rate along the stepped spillway for $q=0.22 \text{ m}^2 \text{ s}^{-1}$

Figures 10 and 11 present the isolines of the shear strain rate $S_{ij}$ and of the vorticity magnitude $\omega_z$ along the stepped spillway for $q=0.22 \text{ m}^2 \text{ s}^{-1}$. As can be seen from these figures, the maxima of $\omega_z$ and $S_{ij}$ are located near the pseudo-bottom and are due to the clockwise rotation. These results are qualitatively similar to those presented by Qian *et al.* (2009).

## CONCLUSION

In this study, the flow over a stepped spillway was simulated by using Ansys Fluent. The free surface was treated with the VOF model and the turbulence was modelled with the standard k-ε model.
According to the simulation results, the flow over the stepped channel is supercritical and the water depth increases with increasing flow rate. Good agreement is found between the numerical and experimental results: the calculated dimensionless water depth agrees well with the measurements. Two relationships (equations 13 and 14) are established for determining the variation of the dimensionless water depth upstream of the inception point and the location of the inception point for a channel slope $\theta = 45^\circ$; the values computed from these formulas compare well with the experimental data. Due to the interaction between the skimming flow and the eddies, the stepped spillway dissipates energy effectively. The peak values of the shear strain rate and vorticity are located near the pseudo-bottom and are due to the clockwise rotation.
\ No newline at end of file diff --git a/samples/texts/7856234/page_7.md b/samples/texts/7856234/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..5d8dbb3671aef792f6c5e4e93df066314fe39c53 --- /dev/null +++ b/samples/texts/7856234/page_7.md @@ -0,0 +1,13 @@
MOHAMMAD S., JALAL A., MICHAEL P. (2012). Numerical Computation of Inception Point Location for Steeply Sloping Stepped Spillways. 9th International Congress on Civil Engineering, Isfahan University of Technology (IUT), May 8-10, Isfahan, Iran.

QIAN Z., HUXIAO Q., HUAI W., AMADOR A. (2009). Numerical simulation and analysis of water flow over stepped spillways. Science in China Series E: Technological Sciences, Vol. 52, N° 7, pp. 1958-1965.

RAJARATNAM N. (1990). Skimming flow in stepped spillways. Journal of Hydraulic Engineering, ASCE, Vol. 116, N° 4, pp. 587-591.

VAN ALWON J., BORMAN D., SLEIGH A., NIK K. (2017). Experimental and numerical modelling of aerated flows over stepped spillways. Proceedings of the 37th IAHR World Congress, August 13-18, Kuala Lumpur, Malaysia.

WOOD I.R., ACKERS P., LOVELESS J. (1983). General method for critical point on spillways.
Journal of Hydraulic Engineering, Vol. 109, No. 2, pp. 308-312.

ZHANG G., CHANSON H. (2015). Hydraulics of the developing flow region of stepped cascades: an experimental investigation. Report CH97/15, School of Civil Engineering, The University of Queensland, Brisbane, 78 pp.

ZHANG G., CHANSON H. (2016). Hydraulics of the developing flow region of stepped spillways. I: Physical modeling and boundary layer development. Journal of Hydraulic Engineering, Vol. 142, N° 7: 04016015.
\ No newline at end of file diff --git a/samples/texts/7856234/page_9.md b/samples/texts/7856234/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..f8c9aa24ea2756d083bfb98c0a2e07ff3e495dda --- /dev/null +++ b/samples/texts/7856234/page_9.md @@ -0,0 +1,5 @@
**Figure 1: Position of the inception point in a stepped spillway**

In the skimming flow regime, air entrainment occurs when the turbulent boundary layer thickness coincides with the water depth (Chanson, 1997). This location is called the inception point (e.g. Figure 1). Upstream of the inception point, the flow is smooth and glassy, whereas downstream of it the flow becomes uniform as the depth of the air-water mixture grows. This position is characterised by two parameters, $L_i$ and $d_i$ (e.g. Fig. 1): the first is the distance from the start of the growth of the boundary layer to the point of inception, and the other is the depth at the point of inception. Knowledge of the position of the start of aeration in a stepped channel is very important for determining the non-aerated zone, which is potentially prone to cavitation damage. The inception point of aeration on stepped spillways is placed further upstream than on smooth spillways. On a smooth spillway, the position of the inception point is a function of the discharge and of the roughness of the spillway. Wood et al. (1983) proposed an approach to estimate $L_i$ and $d_i$.
On a stepped spillway, the position of the inception point is a function of the discharge, spillway roughness, step geometry and spillway geometry. Chanson (2001) developed a method to determine the flow properties at the start of air entrainment for slopes greater than or equal to 22°. Boes and Hager (2003) also derived a mathematical formula for the distance between the start of the turbulent boundary layer and the inception point. On a steep stepped channel, the water depth is less than the critical depth and the flow regime is supercritical.

The boundary layer thickness ($\delta$) is an important element in determining the position of the inception point of free-surface aeration; it is defined as the perpendicular distance from the pseudo-bottom to where the velocity is 99% of the free-stream velocity. Zhang and Chanson (2016) defined a relationship for the evolution of the boundary layer thickness for a slope of 45°: