diff --git a/samples/texts/1085779/page_1.md b/samples/texts/1085779/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..4c41cb8cb8b64269f5171d337d57205e83356fdd --- /dev/null +++ b/samples/texts/1085779/page_1.md @@ -0,0 +1,31 @@ +The Hyder Series + +by Syed Shahabudeen + +July 2021 + +**Abstract** + +The Hyder Series is a generalized version of a special type of multiple +infinite series. In this paper, we will be looking at some of the main aspects +of this series in detail. + +# 1 Introduction + +The Hyder Series is a generalized form of a special type of infinite +series. It is defined as + +$$ +\mathcal{H}^q(\alpha_1, \alpha_2, \alpha_3, \dots, \alpha_k; p_1, p_2, p_3, \dots, p_k; \beta) = \sum_{m_1, m_2, m_3, \dots, m_k = 0}^{\infty} \frac{\prod_{1 \le i \le k} \alpha_i^{m_i}}{\left( \sum_{n=1}^{k} p_n m_n + \beta \right)^q} +$$ + +where $\sum_{m_1,m_2,m_3,\dots,m_k=0}^{\infty} = \sum_{m_1=0}^{\infty} \sum_{m_2=0}^{\infty} \sum_{m_3=0}^{\infty} \dots \sum_{m_k=0}^{\infty}$ + +In this paper we will look at some special values, their respective proofs, +and the relation of the Hyder series to the hypergeometric series. + +**1.1 Notations** + +The *q* in the Hyder notation stands for the power order of the series. +$p_1, p_2, ..., p_k$ are the coefficients of $m_1, m_2, ..., m_k$. 
+If a number is repeated *n* times in the first two slots \ No newline at end of file diff --git a/samples/texts/1085779/page_10.md b/samples/texts/1085779/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..65e969927991a39f567b291b510301f584f4eeb9 --- /dev/null +++ b/samples/texts/1085779/page_10.md @@ -0,0 +1,42 @@ +ratio and exponential constant) therefore + +$$ +\mathcal{H}\left(\frac{1}{\pi}, \frac{1}{\phi}, \frac{1}{e}; 2_{r(3)}; 1\right) = \pi\phi e \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{\pi}}\right)}{\sqrt{\pi}(\pi - \phi)(\pi - e)} + \frac{\tanh^{-1}\left(\frac{1}{\sqrt{\phi}}\right)}{\sqrt{\phi}(\phi - \pi)(\phi - e)} + \frac{\tanh^{-1}\left(\frac{1}{\sqrt{e}}\right)}{\sqrt{e}(e - \pi)(e - \phi)} \right) +$$ + +**Theorem 3.** + +$$ +\mathcal{H}(a_{r(m)}; p_{r(m)}; \eta) = \frac{1}{\eta} {}_2F_1\left(m, \frac{\eta}{p}; \frac{\eta}{p} + 1; a\right); \quad a \in (0, 1), m \in \mathbb{N} +$$ + +where ${}_2F_1$ is the hypergeometric series [3] + +Proof. 
+ +$$ +\begin{align*} +\mathcal{H}(a_{r(m)}; p_{r(m)}; \eta) &= \sum_{n_1,n_2,n_3,\ldots,n_m \ge 0} \frac{a^{n_1+n_2+\cdots+n_m}}{(pn_1+pn_2+\cdots+pn_m+\eta)} \\ +&= \frac{1}{a^{\frac{\eta-1}{p}}} \int_0^1 \sum_{n_1,n_2,n_3,\ldots,n_m \ge 0} (ax^p)^{n_1+n_2+n_3+\cdots+n_m+\frac{\eta-1}{p}} dx \\ +&= \int_0^1 \frac{x^{(\eta-1)}}{(1-ax^p)^m} dx \tag*{\text{(Let } t=x^p\text{)}} \\ +&= \frac{1}{p} \int_0^1 \frac{t^{\frac{\eta}{p}-1}}{(1-at)^m} dt +\end{align*} +$$ + +Since + +$$ +\frac{\Gamma(\beta)\Gamma(\gamma-\beta)}{\Gamma(\gamma)} {}_2F_1(\alpha, \beta; \gamma; z) = \int_0^1 t^{\beta-1}(1-t)^{\gamma-\beta-1}(1-zt)^{-\alpha}dt \quad (\text{Euler Integral}) +$$ + +Therefore + +$$ +\begin{gather*} +\frac{1}{p} \int_0^1 \frac{t^{\frac{\eta}{p}-1}}{(1-at)^m} dt = \frac{1}{p} \frac{\Gamma\left(\frac{\eta}{p}\right) \Gamma(1)}{\Gamma\left(\frac{\eta}{p}+1\right)} {}_2F_1\left(m, \frac{\eta}{p}; \frac{\eta}{p}+1; a\right) \\ +\Rightarrow \\ +\mathcal{H}(a_{r(m)}; p_{r(m)}; \eta) = \frac{1}{\eta} {}_2F_1\left(m, \frac{\eta}{p}; \frac{\eta}{p}+1; a\right) \tag{9} +\end{gather*} +$$ + +$\square$ \ No newline at end of file diff --git a/samples/texts/1085779/page_11.md b/samples/texts/1085779/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..4df184aa56c22ffae90591230cdd4996f49fc22d --- /dev/null +++ b/samples/texts/1085779/page_11.md @@ -0,0 +1,32 @@ +From the above theorem it is clear how the Hyder series is related to the ${}_2F_1$ hypergeometric series. We will now look at some special cases. 
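Theorem 3 is easy to check numerically. The sketch below (standard-library Python; the truncation depths `N` are arbitrary choices, not part of the paper) collapses the $m$-fold sum of the Hyder series to a single sum in which $s = n_1 + \dots + n_m$ occurs with multiplicity $\binom{s+m-1}{m-1}$, and compares it against the Gauss series for ${}_2F_1$:

```python
from math import comb, isclose

def hyder(a, p, eta, m, N=2000):
    # H(a_{r(m)}; p_{r(m)}; eta): collapse the m-fold sum over n_1,...,n_m
    # to a single sum over s = n_1+...+n_m, which occurs C(s+m-1, m-1) times.
    return sum(comb(s + m - 1, m - 1) * a**s / (p * s + eta) for s in range(N))

def hyp2f1(alpha, beta, gamma, z, N=2000):
    # Gauss series sum_k (alpha)_k (beta)_k / (gamma)_k * z^k / k!, valid for |z| < 1.
    term, total = 1.0, 0.0
    for k in range(N):
        total += term
        term *= (alpha + k) * (beta + k) / (gamma + k) * z / (k + 1)
    return total

# Parameters a = 1/2, p = 2, eta = 1, m = 3 (an arbitrary test case).
a, p, eta, m = 0.5, 2, 1, 3
lhs = hyder(a, p, eta, m)
rhs = hyp2f1(m, eta / p, eta / p + 1, a) / eta
assert isclose(lhs, rhs, rel_tol=1e-9)
```

With these parameters both sides also agree with the closed form of Eq (2), $\frac{3\tanh^{-1}(1/\sqrt{2})}{4\sqrt{2}} + \frac{7}{4}$.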
+ +**Corollary 3.1.** + +$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) = 2^{m-1} \left(\psi\left(\frac{1+m}{2}\right) - \psi\left(\frac{m}{2}\right)\right); \quad m \in \mathbb{N} $$ + +*Proof.* + +$$ +\begin{align*} +\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) &= \frac{1}{m} {}_2F_1\left(m, m; m+1; \frac{1}{2}\right) && \text{(from Theorem 3)} \\ +&= \frac{2^m}{m} {}_2F_1\left(m, 1; m+1; -1\right) && \text{(Apply Pfaff Transformation)} \\ +&= 2^m \int_0^1 \frac{t^{m-1}}{1+t} dt +\end{align*} +$$ + +The last integral can easily be evaluated in terms of the digamma function [5]; +a beautiful proof of it can also be found in the book "(Almost) Impossible Integrals, Sums, and Series" [1] at page 67. Therefore + +$$ \int_0^1 \frac{t^{m-1}}{1+t} dt = \frac{1}{2} \left( \psi\left(\frac{1+m}{2}\right) - \psi\left(\frac{m}{2}\right) \right) $$ + +Hence + +$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) = 2^{m-1} \left( \psi\left(\frac{1+m}{2}\right) - \psi\left(\frac{m}{2}\right) \right) \quad (10) $$ + +□ + +By making use of the series definition of the digamma function, we can write the digamma relation in Eq (10) in terms of the Lerch transcendent [4] notation, i.e. + +$$ \mathcal{H}\left(\left(\frac{1}{2}\right)_{r(m)} ; 1_{r(m)}; m\right) = 2^m \Phi(-1, 1, m) $$ + +where $\Phi(-1, 1, m) = \sum_{k=0}^{\infty} \frac{(-1)^k}{m+k}$ \ No newline at end of file diff --git a/samples/texts/1085779/page_12.md b/samples/texts/1085779/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..92788633031cc32bb8181133e885e4b7451ac3cb --- /dev/null +++ b/samples/texts/1085779/page_12.md @@ -0,0 +1,33 @@ +**Example 3.** Let us plug the value $m = 4$ into Eq (10). As per the definition it equals + +$$ +\begin{aligned} +H\left(\left(\frac{1}{2}\right)_{r(4)} ; 1_{r(4)} ; 4\right) &= \sum_{x=0}^{\infty} \sum_{y=0}^{\infty} \sum_{z=0}^{\infty} \sum_{l=0}^{\infty} \frac{1}{2^{x+y+z+l}(x+y+z+l+4)} && 
\text{(Apply result from Eq (10))} \\ +&= 2^3 \left(\psi\left(\frac{5}{2}\right) - \psi(2)\right) +\end{aligned} +$$ + +Since $\psi(\frac{5}{2}) = \frac{8}{3} - \gamma - \ln(4)$ and $\psi(2) = 1 - \gamma$ (where $\gamma$ is the Euler-Mascheroni [2] constant), + +therefore + +$$ H\left(\left(\frac{1}{2}\right)_{r(4)}; 1_{r(4)}; 4\right) = \frac{40}{3} - 16 \ln(2) $$ + +## 1.3 Hyder Series of Higher Order + +The Hyder Series that we encountered at the beginning of this paper were all of order 1, i.e., $q = 1$. In this section we will look at some special cases of the Hyder Series of higher order. + +**Lemma 2.** For $q, \beta \in \mathbb{N}$ + +$$ +\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx = a^{\beta+2} (-1)^q (q!) \left( \mathrm{Li}_q\left(\frac{1}{a}\right) - (\beta+1) \mathrm{Li}_{q+1}\left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right) +$$ + +where $\mathrm{Li}_s(z)$ is the polylogarithm function [6] + +*Proof.* + +$$ +\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx = a \int_0^1 \frac{\frac{x}{a}}{\left(1-\frac{x}{a}\right)^2} x^\beta \log^q(x) dx +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_13.md b/samples/texts/1085779/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..e53f659df3b2c4ed277dedfbb450b1a95b4d6341 --- /dev/null +++ b/samples/texts/1085779/page_13.md @@ -0,0 +1,38 @@ +The above integral can be evaluated by using the series + +$$ \sum_{k=0}^{\infty} kx^k = \frac{x}{(1-x)^2} $$ + +In our case $x$ equals $\frac{x}{a}$. 
Therefore we can write the integral as + +$$ a \int_0^1 \frac{\frac{x}{a}}{(1-\frac{x}{a})^2} x^\beta \log^q(x) dx = a \sum_{k=0}^\infty \frac{k}{a^k} \int_0^1 x^{\beta+k} \log^q(x) dx $$ + +To evaluate the integral we'll make use of a well-known result + +$$ \int_0^1 x^m \log^q(x) dx = \frac{(-1)^q q!}{(m+1)^{q+1}} \quad (11) $$ + +therefore + +$$ +\begin{aligned} +a \sum_{k=0}^{\infty} \frac{k}{a^k} \int_0^1 x^{\beta+k} \log^q(x) dx &= a(-1)^q (q!) \sum_{k=0}^{\infty} \frac{k}{a^k (\beta + k + 1)^{q+1}} \\ +&= a(-1)^q (q!) \left( \sum_{k=0}^{\infty} \frac{1}{a^k (\beta + k + 1)^q} - (\beta + 1) \sum_{k=0}^{\infty} \frac{1}{a^k (\beta + k + 1)^{q+1}} \right) \\ +&= a(-1)^q (q!) \left( \Phi\left(\frac{1}{a}, q, \beta+1\right) - (\beta+1)\Phi\left(\frac{1}{a}, q+1, \beta+1\right) \right) +\end{aligned} +$$ + +On writing the Lerch transcendent in terms of the polylogarithm function we get + +$$ \Phi\left(\frac{1}{a}, q, \beta+1\right) = a^{\beta+1} \left( \mathrm{Li}_q\left(\frac{1}{a}\right) - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right) $$ + +and + +$$ \Phi\left(\frac{1}{a}, q+1, \beta+1\right) = a^{\beta+1} \left( \mathrm{Li}_{q+1}\left(\frac{1}{a}\right) - \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} \right) $$ + +Finally, upon substitution, we get the result + +$$ +\begin{aligned} +\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx = {}& a^{\beta+2} (-1)^q (q!) \left( \mathrm{Li}_q\left(\frac{1}{a}\right) - (\beta+1) \mathrm{Li}_{q+1}\left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} \right. \\ +& \left. 
- \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right) +\end{aligned} +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_14.md b/samples/texts/1085779/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..6929ea80692887317d227a32ad0ab126cdf0e480 --- /dev/null +++ b/samples/texts/1085779/page_14.md @@ -0,0 +1,30 @@ +**Theorem 4.** + +$$ +\mathcal{H}^{q+1} \left( \left(\frac{1}{a}\right)_{r(2)} ; 1_{r(2)}; \beta + 2 \right) = a^{\beta+2} \left( \mathrm{Li}_q \left(\frac{1}{a}\right) - (\beta+1) \mathrm{Li}_{q+1} \left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right) +$$ + +Proof. As per the definition of the Hyder Series + +$$ +\mathcal{H}^{q+1} \left( \left( \frac{1}{a} \right)_{r(2)} ; 1_{r(2)} ; \beta + 2 \right) = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{a^{k_1+k_2}(k_1+k_2+\beta+2)^{q+1}} +$$ + +To evaluate the series, we will make use of Eq (11); in this case $m = k_1 + k_2 + \beta + 1$, + +therefore + +$$ +\begin{align*} +\mathcal{H}^{q+1} \left( \left(\frac{1}{a}\right)_{r(2)} ; 1_{r(2)} ; \beta + 2 \right) &= \frac{(-1)^q}{q!} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{a^{k_1+k_2}} \int_0^1 x^{k_1+k_2+\beta+1} \log^q(x) dx \\ +&= \frac{(-1)^q}{q!} \int_0^1 x^{\beta+1} \log^q(x) \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{x^{k_1+k_2}}{a^{k_1+k_2}} dx \\ +&= \frac{(-1)^q}{q!} \underbrace{\int_0^1 \frac{x^{\beta+1}}{\left(1-\frac{x}{a}\right)^2} \log^q(x) dx}_{A} +\end{align*} +$$ + +The integral *A* above is the same integral that we evaluated in +**Lemma 2**. 
Therefore the final result becomes + +$$ +\mathcal{H}^{q+1} \left( \left(\frac{1}{a}\right)_{r(2)} ; 1_{r(2)} ; \beta + 2 \right) = a^{\beta+2} \left( \operatorname{Li}_q \left(\frac{1}{a}\right) - (\beta+1) \operatorname{Li}_{q+1} \left(\frac{1}{a}\right) + (\beta+1) \sum_{k=1}^{\beta} \frac{1}{a^k k^{q+1}} - \sum_{k=1}^{\beta} \frac{1}{a^k k^q} \right) +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_15.md b/samples/texts/1085779/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..2033213a78a4f159a04a4303d3cef4ac528c1e5d --- /dev/null +++ b/samples/texts/1085779/page_15.md @@ -0,0 +1,39 @@ +**Example 4.** For $q = 2, \beta = 2$ and $a = 2$ in **Theorem 4**, from the definition we have + +$$ +\begin{aligned} +\mathcal{H}^3 \left( \left(\frac{1}{2}\right)_{r(2)} ; 1_{r(2)} ; 4 \right) &= \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{2^{k_1+k_2}(k_1+k_2+4)^3} \\ +&= 2^4 \left( \mathrm{Li}_2\left(\frac{1}{2}\right) - 3 \mathrm{Li}_3\left(\frac{1}{2}\right) + 3 \sum_{k=1}^{2} \frac{1}{2^k k^3} - \sum_{k=1}^{2} \frac{1}{2^k k^2} \right) +\end{aligned} +$$ + +here + +$$ \mathrm{Li}_2\left(\frac{1}{2}\right) = \frac{\pi^2}{12} - \frac{\log^2(2)}{2} $$ + +and + +$$ \mathrm{Li}_3\left(\frac{1}{2}\right) = \frac{\log^3(2)}{6} + \frac{7\zeta(3)}{8} - \frac{\pi^2 \log(2)}{12} $$ + +therefore + +$$ \mathcal{H}^3 \left( \left(\frac{1}{2}\right)_{r(2)} ; 1_{r(2)} ; 4 \right) = \frac{4\pi^2}{3} + 4\pi^2 \log(2) + \frac{33}{2} - 8\log^3(2) - 8\log^2(2) - 42\zeta(3) $$ + +**Corollary 4.1.** The following equation holds true for the case when $q \ge 2$ + +$$ \mathcal{H}^{q+1}(1_{r(2)}; 1_{r(2)}; 2) = \zeta(q) - \zeta(q+1) \quad (12) $$ + +where $\zeta(q)$ is the zeta function[7] + +*Proof.* + +$$ +\begin{aligned} +\mathcal{H}^{q+1}(1_{r(2)}; 1_{r(2)}; 2) &= \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} \frac{1}{(k_1 + k_2 + 2)^{q+1}} && (\text{Apply Theorem 4}) \\ +&= \mathrm{Li}_q(1) - \mathrm{Li}_{q+1}(1) +\end{aligned} +$$ + 
+Since $\mathrm{Li}_q(1) = \zeta(q)$, therefore + +$$ \mathcal{H}^{q+1}(1_{r(2)}; 1_{r(2)}; 2) = \zeta(q) - \zeta(q+1) $$ \ No newline at end of file diff --git a/samples/texts/1085779/page_16.md b/samples/texts/1085779/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..be1e56e7e28cd1185c1a25138512a6142b310ec8 --- /dev/null +++ b/samples/texts/1085779/page_16.md @@ -0,0 +1,31 @@ +**Example 5.** Here are some examples for the above case + +$$ +\begin{aligned} +\mathcal{H}^3 (1_{r(2)}; 1_{r(2)}; 2) &= \zeta(2) - \zeta(3) \\ +&= \frac{\pi^2}{6} - \zeta(3) +\end{aligned} + $$ + +$$ +\begin{aligned} +\mathcal{H}^4 (1_{r(2)}; 1_{r(2)}; 2) &= \zeta(3) - \zeta(4) \\ +&= \zeta(3) - \frac{\pi^4}{90} +\end{aligned} + $$ + +## 2 Conclusion + +This paper was just an introduction to the Hyder Series. We saw some important results and some special cases of the Hyder series of higher order. The series is named in honour of my late grandfather Syed Hyder, who was a chief executive engineer in the water department of the state of Tamil Nadu. He enjoyed solving mathematical problems in his leisure time and was a man of wit and humour. I hope this series has many more interesting results to be discovered, as well as further relations to some special functions. + +## References + +[1] Cornel Ioan Vălean. (Almost) impossible integrals, sums, and series. Springer, 2019. + +[2] Eric W Weisstein. Euler-Mascheroni constant. https://mathworld.wolfram.com/COMS.html, 2002. + +[3] Eric W Weisstein. Hypergeometric function. https://mathworld.wolfram.com/COMS.html, 2002. + +[4] Eric W Weisstein. Lerch transcendent. https://mathworld.wolfram.com/COMS.html, 2002. + +[5] Eric W Weisstein. Polygamma function. https://mathworld.wolfram.com/COMS.html, 2002. 
\ No newline at end of file diff --git a/samples/texts/1085779/page_17.md b/samples/texts/1085779/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..a6ad25021a099dc3bd6bb026880768c1a7b71635 --- /dev/null +++ b/samples/texts/1085779/page_17.md @@ -0,0 +1,17 @@ +[6] Eric W Weisstein. Polylogarithm. https://mathworld.wolfram.com/, 2002. + +[7] Eric W Weisstein. Riemann zeta function. https://mathworld.wolfram.com/, 2002. + +[8] Eric W Weisstein. Rising factorial. https://mathworld.wolfram.com/, 2003. + +[9] Wikipedia contributors. General Leibniz rule, December 2020. + +*Romanian Mathematical Magazine* + +Web: http://www.ssmrmh.ro + +The Author: This article is published with open access + +Warriorshahab@gmail.com +Kmea Engineering College +Ernakulam, Kerala-India \ No newline at end of file diff --git a/samples/texts/1085779/page_2.md b/samples/texts/1085779/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..72bd8ad86327446250123b032d54901a928dca4d --- /dev/null +++ b/samples/texts/1085779/page_2.md @@ -0,0 +1,22 @@ +of the Hyder Series, then it can be denoted as $\mathcal{H}^q(\alpha_{r(n)}; p_{r(n)}; \beta)$, where $r(n)$ indicates that the entry is repeated $n$ times. As per the definition it equals + +$$ \mathcal{H}^q(\alpha_{r(n)}; p_{r(n)}; \beta) = \sum_{m_1, m_2, m_3, \dots, m_n = 0}^{\infty} \frac{\alpha^{m_1+m_2+m_3+\dots+m_n}}{\left( p \sum_{k=1}^{n} m_k + \beta \right)^q} $$ + +It has to be noted that the number of repeated sums is equal to the number of times a term is repeated in either of the first two slots of the Hyder notation. 
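As a quick sanity check on the repeated-argument notation, the sketch below (standard-library Python; the parameter values and truncation depths `N` are arbitrary choices of ours) evaluates $\mathcal{H}^q(\alpha_{r(n)}; p_{r(n)}; \beta)$ both as a literal $n$-fold sum and as a single sum in which $s = m_1 + \dots + m_n$ occurs with multiplicity $\binom{s+n-1}{n-1}$:

```python
from itertools import product
from math import comb, isclose

def hyder_bruteforce(alpha, p, beta, q, n, N=60):
    # Truncated n-fold sum straight from the definition of H^q(alpha_{r(n)}; p_{r(n)}; beta).
    total = 0.0
    for ms in product(range(N), repeat=n):
        s = sum(ms)
        total += alpha**s / (p * s + beta) ** q
    return total

def hyder_collapsed(alpha, p, beta, q, n, N=5000):
    # Same series: group terms by s = m_1 + ... + m_n, counted C(s+n-1, n-1) times.
    return sum(comb(s + n - 1, n - 1) * alpha**s / (p * s + beta) ** q for s in range(N))

# n = 2 repeated slots with alpha = 2/3, p = 2, beta = 1, q = 1.
lhs = hyder_bruteforce(2 / 3, 2, 1, 1, 2)
rhs = hyder_collapsed(2 / 3, 2, 1, 1, 2)
assert isclose(lhs, rhs, rel_tol=1e-6)
```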
+ +## 1.2 Some Examples + +These are some examples for the case when $q = 1$ + +$$ \mathcal{H}\left(\frac{2}{3}, \frac{2}{3}; 2, 2; 1\right) = \frac{\sqrt{3}}{2\sqrt{2}} \tanh^{-1}\left(\sqrt{\frac{2}{3}}\right) + \frac{3}{2} \quad (1) $$ + +*Proof.* + +$$ +\begin{align*} +\mathcal{H}\left(\frac{2}{3}, \frac{2}{3}; 2, 2; 1\right) &= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \frac{1}{\left(\frac{3}{2}\right)^{m+n} (2m + 2n + 1)} \\ +&= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \int_0^1 \frac{x^{2m+2n}}{\left(\frac{3}{2}\right)^{m+n}} dx \\ +&= \int_0^1 \left( \sum_{m=0}^{\infty} \left(\frac{2x^2}{3}\right)^m \sum_{n=0}^{\infty} \left(\frac{2x^2}{3}\right)^n \right) dx \\ +&= \frac{3^2}{2^2} \underbrace{\int_0^1 \frac{1}{\left(\frac{3}{2}-x^2\right)^2} dx}_{I\left(\frac{3}{2}\right)} +\end{align*} +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_3.md b/samples/texts/1085779/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..81e5ca675b2d46445b3404ab8bc954876099748b --- /dev/null +++ b/samples/texts/1085779/page_3.md @@ -0,0 +1,35 @@ +Here + +$$ +\begin{align*} +I(a) &= \int_0^1 \frac{dx}{(a - x^2)^2} \\ +&= -\frac{\partial}{\partial a} \left( \frac{\tanh^{-1}\left(\frac{x}{\sqrt{a}}\right)}{\sqrt{a}} \right) \Bigg|_{x=0}^1 \\ +&= \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{2a\sqrt{a}} + \frac{1}{2a^2\left(1 - \frac{1}{a}\right)} +\end{align*} +$$ + +$\therefore I\left(\frac{3}{2}\right) = \frac{\sqrt{2} \tanh^{-1}\left(\frac{\sqrt{2}}{\sqrt{3}}\right)}{3\sqrt{3}} + \frac{2}{3}$ + +So + +$$ +\begin{align*} +H\left(\frac{2}{3}, \frac{2}{3}; 2, 2; 1\right) &= \frac{3^2}{2^2} \left( \frac{\sqrt{2} \tanh^{-1}\left(\frac{\sqrt{2}}{\sqrt{3}}\right)}{3\sqrt{3}} + \frac{2}{3} \right) \\ +&= \frac{\sqrt{3}}{2\sqrt{2}} \tanh^{-1}\left(\sqrt{\frac{2}{3}}\right) + \frac{3}{2} +\end{align*} +$$ + +$$ +\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(3)}; 2_{r(3)}; 1\right) = \frac{3\tanh^{-1}\left(\frac{1}{\sqrt{2}}\right)}{4\sqrt{2}} 
+ \frac{7}{4} \quad (2) +$$ + +Proof. + +$$ +\begin{align*} +\mathcal{H}\left(\left(\frac{1}{2}\right)_{r(3)}; 2_{r(3)}; 1\right) &= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} \frac{1}{2^{m+n+p}(2m+2n+2p+1)} \\ +&= \sum_{m=0}^{\infty} \sum_{n=0}^{\infty} \sum_{p=0}^{\infty} \int_{0}^{1} \left(\frac{x^2}{2}\right)^{m+n+p} dx \\ +&= 2^3 \int_{0}^{1} \frac{dx}{(2-x^2)^3} \\ +&= \frac{3 \tanh^{-1}\left(\frac{1}{\sqrt{2}}\right)}{4\sqrt{2}} + \frac{7}{4} +\end{align*} +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_4.md b/samples/texts/1085779/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..5c536ee5320bd73e62aa2a9dfb67cb168f738456 --- /dev/null +++ b/samples/texts/1085779/page_4.md @@ -0,0 +1,25 @@ +**Theorem 1.** This theorem is a generalised version of the two examples above. It holds true for $\alpha = \frac{1}{a}, p = 2$ and $q = 1$ + +$$ \mathcal{H}\left(\left(\frac{1}{a}\right)_{r(m+1)} ; 2_{r(m+1)}; 1\right) = \frac{1}{2} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(2m)!\sqrt{a}}{2^{2m}(m!)^2} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) $$ + +where + +$$ \Omega(a, i) = \frac{1}{i4^{m-i}(1-\frac{1}{a})^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{1}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k, \quad a > 1, m \in \mathbb{N} $$ + +Before proceeding to the proof of Theorem 1, we first prove an important lemma. + +**Lemma 1.** + +$$ \frac{\partial^i}{\partial a^i} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) = \frac{x(-1)^i (i-1)!}{2\sqrt{a}(a-x^2)^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{x^2}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k ; \quad i \in \mathbb{N} \quad (3) $$ + +*Proof.* To find the $i$-th derivative of the given expression we'll make use of the Leibniz differentiation [9] formula, which states that + +$$ (fg)^i = \sum_{k=0}^{i} \binom{i}{k} f^{(i-k)} g^{(k)} \quad (4) $$ + +where *f* and *g* are $n$ times differentiable functions. 
On differentiating $\tanh^{-1}\left(\frac{x}{\sqrt{a}}\right)$ with respect to *a* we get + +$$ \frac{\partial}{\partial a} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) = -\frac{x}{2\sqrt{a}(a-x^2)} $$ + +Let us take $f = \frac{-x}{2(a-x^2)}$ and $g = \frac{1}{\sqrt{a}}$, + +so the $n$-th derivatives of both functions with respect to $a$ are \ No newline at end of file diff --git a/samples/texts/1085779/page_5.md b/samples/texts/1085779/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..67b67712d06e5b59c413cd615e073fbc59d4b4cc --- /dev/null +++ b/samples/texts/1085779/page_5.md @@ -0,0 +1,34 @@ +$$f^{(n)} = \frac{-x(-1)^n n!}{2(a-x^2)^{n+1}} \quad \text{and} \quad g^{(n)} = \frac{(-1)^n (2n-1)!!}{2^n a^{\frac{2n+1}{2}}}$$ + +On substituting the above values in Eq (4) we get + +$$ (fg)^i = \frac{-x}{2} \sum_{k=0}^{i} \binom{i}{k} \frac{(-1)^i (i-k)! (2k-1)!!}{(a-x^2)^{i-k+1} 2^k a^{\frac{2k+1}{2}}} $$ + +Since $\frac{\partial^i}{\partial a^i} \tanh^{-1}\left(\frac{x}{\sqrt{a}}\right) = (fg)^{i-1}$ for $i \ge 1$, we have + +$$ (fg)^{i-1} = \frac{-x}{2} \sum_{k=0}^{i-1} \binom{i-1}{k} \frac{(-1)^{i-1}(i-k-1)!(2k-1)!!}{(a-x^2)^{i-k} 2^k a^{\frac{2k+1}{2}}} $$ + +On rearranging and writing the double factorial in terms of the +Pochhammer symbol [8], 
i.e $\frac{(2k-1)!!}{2^k} = \left(\frac{1}{2}\right)_k$, + +we get + +$$ (fg)^{i-1} = \frac{x (-1)^i (i-1)!}{2\sqrt{a}(a-x^2)^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{x^2}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k $$ + +$$ \therefore (fg) = \frac{\partial}{\partial a} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) $$ + +$$ \therefore \quad \frac{\partial^i}{\partial a^i} \tanh^{-1} \left( \frac{x}{\sqrt{a}} \right) = \frac{x (-1)^i (i-1)!}{2\sqrt{a}(a-x^2)^i} \sum_{k=0}^{i-1} \frac{\left(1 - \frac{x^2}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k $$ + +□ + +**Proof of Theorem 1** + +*Proof.* + +$$ +\begin{aligned} +H\left(\left(\frac{1}{a}\right)_{r(m+1)} ; 2_{r(m+1)}; 1\right) &= \sum_{n_1, n_2, n_3, \ldots, n_{m+1} \ge 0} \frac{1}{a^{n_1+n_2+\cdots+n_{m+1}} (2n_1 + 2n_2 + \cdots + 2n_{m+1} + 1)} \\ +&= \int_0^1 \sum_{n_1, n_2, n_3, \ldots, n_{m+1} \ge 0} \left(\frac{x^2}{a}\right)^{n_1+n_2+\cdots+n_{m+1}} dx \\ +&= a^{m+1} \underbrace{\int_0^1 \frac{dx}{(a-x^2)^{m+1}}}_{I} +\end{aligned} +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_6.md b/samples/texts/1085779/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..c4f5b0da1ead180db5f1569e4edb77e8babb5272 --- /dev/null +++ b/samples/texts/1085779/page_6.md @@ -0,0 +1,52 @@ +here + +$$ +\begin{align*} +I &= \int_0^1 \frac{dx}{(a - x^2)^{m+1}} \\ +&= \frac{(-1)^m}{m!} \frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right) +\end{align*} +$$ + +Now to differentiate the above expression we'll make use of the Leibniz differentiation +formula + +$$ +(fg)^m = \sum_{i=0}^{m} \binom{m}{i} f^{(m-i)} g^{(i)} \quad (5) +$$ + +Let us consider $f = \frac{1}{\sqrt{a}}$ and $g = \tanh^{-1} \frac{1}{\sqrt{a}}$, + +so their respective $n$-th derivatives are $f^{(n)} = \frac{(-1)^n (2n)!}{2^{2n} n! 
a^{\frac{2n+1}{2}}}$ +and $g^{(n)} =$ + +$$ +\frac{(-1)^n (n-1)!}{2\sqrt{a}(a-1)^n} \sum_{k=0}^{n-1} \frac{\left(1-\frac{1}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k \quad (\text{from Lemma 1 with } x = 1) +$$ + +we can rewrite Eq (5) as + +$$ +(fg)^m = \sum_{i=1}^{m} \binom{m}{i} f^{(m-i)} g^{(i)} + f^{(m)} g \quad (6) +$$ + +$$ +\begin{equation} +\begin{split} +\frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right) &= \frac{(-1)^m}{2a^{m+1}} \sum_{i=1}^{m} \binom{m}{i} \frac{(i-1)!(2m-2i)!}{4^{m-i}(m-i)!(1-\frac{1}{a})^i} \sum_{k=0}^{i-1} \frac{\left(1-\frac{1}{a}\right)^k}{k!} \left(\frac{1}{2}\right)_k + \\ +&\qquad \frac{(-1)^m (2m)!}{2^{2m}(m!)a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) +\end{split} +\end{equation} +$$ + +the above expression can be rewritten as + +$$ +\begin{equation} +\begin{aligned} +\frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right) &= \frac{(-1)^m m!}{2a^{m+1}} \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) + \\ +&\qquad \frac{(-1)^m (2m)!}{2^{2m}(m!)a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) +\end{aligned} +\end{equation} +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_7.md b/samples/texts/1085779/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..0cca976514023284835f549a68ca2b94b077fba6 --- /dev/null +++ b/samples/texts/1085779/page_7.md @@ -0,0 +1,43 @@ +where + +$$ +\Omega (a, i) = \frac{1}{i 4^{m-i} (1 - \frac{1}{a})^i} \sum_{k=0}^{i-1} \frac{(1 - \frac{1}{a})^k}{k!} \left(\frac{1}{2}\right)_k +$$ + +Recall that + +$$ +I = \frac{(-1)^m}{m!} \frac{\partial^m}{\partial a^m} \left( \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a}}\right)}{\sqrt{a}} \right) +$$ + +∴ + +$$ +\begin{align*} +I &= \frac{(-1)^m}{m!} \left( \frac{(-1)^m m!}{2a^{m+1}} \left( \sum_{i=1}^{m} 
\binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(-1)^m (2m)!}{2^{2m}(m!)a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) \right) \\ +&= \frac{1}{2a^{m+1}} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(2m)!}{2^{2m}(m!)^2 a^{m+\frac{1}{2}}} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) +\end{align*} +$$ + +∴ + +$$ +H\left(\left(\frac{1}{a}\right)_{r(m+1)}; 2_{r(m+1)}; 1\right) = a^{m+1} I +$$ + +and finally we get the result + +$$ +\mathcal{H}\left(\left(\frac{1}{a}\right)_{r(m+1)} ; 2_{r(m+1)} ; 1\right) = \frac{1}{2} \left( \sum_{i=1}^{m} \binom{2m-2i}{m-i} \Omega(a,i) \right) + \frac{(2m)!\sqrt{a}}{2^{2m}(m!)^2} \tanh^{-1}\left(\frac{1}{\sqrt{a}}\right) \tag{7} +$$ + +□ + +**Example 1.** Let us plug the values $a = 3$ and $m = 4$ into Eq (7), i.e. $\mathcal{H}\left(\left(\frac{1}{3}\right)_{r(5)} ; 2_{r(5)} ; 1\right)$ where $\frac{1}{3}$ and 2 are repeated 5 times. Therefore from Theorem 1 we get + +$$ +\begin{align*} +\mathcal{H}\left(\left(\frac{1}{3}\right)_{r(5)} ; 2_{r(5)} ; 1\right) &= \frac{1}{2} \left( \sum_{i=1}^{4} \binom{8-2i}{4-i} \Omega(3,i) \right) + \frac{(8)!\sqrt{3}}{2^8 (4!)^2} \tanh^{-1}\left(\frac{1}{\sqrt{3}}\right) \\ +&= \frac{249}{128} + \frac{35\sqrt{3}}{128} \tanh^{-1}\left(\frac{1}{\sqrt{3}}\right) +\end{align*} +$$ \ No newline at end of file diff --git a/samples/texts/1085779/page_8.md b/samples/texts/1085779/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..d0c5cb887e44145c2b25afe75d22b78ed5b036a0 --- /dev/null +++ b/samples/texts/1085779/page_8.md @@ -0,0 +1,34 @@ +**Theorem 2.** *This equation holds true for $p = 2, \beta = 1$ and for the case when each value of $a_i$ is greater than 1.* + +$$ +\mathcal{H} \left( \left\{ \frac{1}{a_i} \right\}_{i=1}^n ; 2_{r(n)} ; 1 \right) = (-1)^{n+1} \left( \prod_{i=1}^{n} a_i \right) \sum_{cyc} \frac{\tanh^{-1} \left( \frac{1}{\sqrt{a_1}} \right)}{\sqrt{a_1} \prod_{1 < j \le n} (a_1 - a_j)}; \quad a_i > 1 +$$ + +*Proof.* As per the definition of the Hyder series we can write the expression + +$$ +\begin{align*} 
+\mathcal{H}\left(\left\{\frac{1}{a_i}\right\}_{i=1}^n; 2_{r(n)}; 1\right) &= \sum_{k_1,k_2,k_3,\ldots,k_n \ge 0} \frac{1}{a_1^{k_1} a_2^{k_2} \cdots a_n^{k_n} (2k_1 + 2k_2 + \cdots + 2k_n + 1)} \\ +&= \int_0^1 \sum_{k_1,k_2,k_3,\ldots,k_n \ge 0} \left(\frac{x^2}{a_1}\right)^{k_1} \left(\frac{x^2}{a_2}\right)^{k_2} \cdots \left(\frac{x^2}{a_n}\right)^{k_n} dx \\ +&= (-1)^n \left(\prod_{i=1}^n a_i\right) \int_0^1 \frac{1}{(x^2-a_1)(x^2-a_2)\cdots(x^2-a_n)} dx +\end{align*} +$$ + +The following integral can be solved by using partial fractions. It is quite +interesting to note that, while using partial fractions for this product, we can +see a cyclic pattern in it, i.e. + +$$ +\frac{1}{(x^2 - a_1)(x^2 - a_2) \dots (x^2 - a_n)} = \sum_{cyc} \frac{1}{(x^2 - a_1) \prod_{1 < j \le n} (a_1 - a_j)} +$$ + +therefore + +$$ +\begin{align*} +\int_0^1 \frac{1}{(x^2 - a_1)(x^2 - a_2) \dots (x^2 - a_n)} dx &= \int_0^1 \sum_{cyc} \frac{1}{(x^2 - a_1) \prod_{1 < j \le n} (a_1 - a_j)} dx \\ +&= \sum_{cyc} \frac{1}{\prod_{1 < j \le n} (a_1 - a_j)} \int_0^1 \frac{1}{(x^2 - a_1)} dx +\end{align*} +$$ + +since $\int_0^1 \frac{1}{(x^2 - a_1)} dx = \frac{-\tanh^{-1}\left(\frac{1}{\sqrt{a_1}}\right)}{\sqrt{a_1}}$ \ No newline at end of file diff --git a/samples/texts/1085779/page_9.md b/samples/texts/1085779/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..b18ba8c0c28aa2fb0e171386dec4667cd2cb7ad6 --- /dev/null +++ b/samples/texts/1085779/page_9.md @@ -0,0 +1,28 @@ +therefore + +$$ +\int_0^1 \frac{1}{(x^2 - a_1)(x^2 - a_2) \dots (x^2 - a_n)} dx = - \sum_{cyc} \frac{\tanh^{-1}\left(\frac{1}{\sqrt{a_1}}\right)}{\sqrt{a_1} \prod_{1 < j \le n} (a_1 - a_j)} $$ +We say that the edge $(u, v)$ has attribute $c \in \{1, \dots, K\}$ if $\phi(u, v) = c$. One can view the categorical edge attributes as some mode of the communication event between actors $u$ and $v$ (e.g., a topic label derived from the content of the communication). 
+ +The specific anomaly we aim to detect is the “chatter” alternative – a small (unspecified) subset of vertices with altered communication behavior in an otherwise homogeneous setting. Our inference task is to determine whether or not a graph $(\mathcal{V}, \mathcal{E}_{\phi})$ includes a subset of vertices $\mathcal{M} = \{v_1, v_2, \dots, v_m\}$ whose edge-connectivity within the subset exhibits a different behaviour than that found among the remaining vertices in the graph. + +To this end we consider the problem of detecting chatter anomalies in a graph using hypothesis testing on a fusion of attributed graph invariants. In particular, the focus of this paper is analyzing and comparing the inferential power of the linear attribute fusion of the attributed $q$-clique invariant + +$$ T_q^W(G) = \sum_{c_1, \dots, c_K \in P\left(\binom{q}{2}, K\right)} w_{c_1, \dots, c_K} \sum_{(u_1, \dots, u_q) \in \binom{\mathcal{V}}{q}} h(u_1, \dots, u_q; c_1, \dots, c_K), $$ + +where the sum is over the collection of partitions $P\left(\binom{q}{2}, K\right)$ of $\binom{q}{2}$ into $K$ non-negative parts, $W = \{w_i\}_{i \in P\left(\binom{q}{2}, K\right)}$ are the fusion weights, and the summand $h(u_1, \dots, u_q; c_1, \dots, c_K)$ indicates the event that the vertices $u_1, \dots, u_q$ are elements of a $q$-clique with $c_r$ edges of color $r$. Specifically, we consider the cases $q=2$, which yields the size fusion $T_2^W$, and $q=3$, which yields the triangle fusion $T_3^W$. + +Our random graph model is motivated by the time series model found in Lee and Priebe [forthcoming]: for each vertex $v \in \mathcal{V}$ we assign a latent variable $\mathbf{X}^v = (X_1^v, \dots, X_d^v)$ drawn independently of all other vertices from some $d$-dimensional distribution. The edge-attribution function will be a random variable where the probability of an edge $(u, v)$ having attribute $c$ is defined to be some predetermined function of the inner product of the latent variables. 
We assume that the edge attributes, conditioned on the latent variables, are independent. In this paper, we will assume that $\mathbf{X}^v \sim \text{Dirichlet}(\lambda_0^v, \dots, \lambda_K^v)$ and + +$$ \mathbb{P}\{\phi(u,v) = c\} = X_c^u X_c^v $$ \ No newline at end of file diff --git a/samples/texts/2058102/page_2.md b/samples/texts/2058102/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..2c9614786708a76a3782c254d86b21d3870820cb --- /dev/null +++ b/samples/texts/2058102/page_2.md @@ -0,0 +1,35 @@ +for all $(u, v) \in \binom{\mathcal{V}}{2}$ and all $c \in \{1, \dots, K\}$. This model choice is analogous to the first and second approximations found in Lee and Priebe [forthcoming]: if we write $\lambda^v = (\lambda_0^v, \dots, \lambda_K^v) = (1 + r x_0^v, \dots, 1 + r x_K^v)$ for some fixed $(x_0^v, \dots, x_K^v)$ in the unit simplex and non-negative real $r$, then $r \to \infty$ yields the first approximation model (i.e., the “independent edge model”). We mention that our approach herein differs from the second approximation in Lee and Priebe [forthcoming]; their second approximation yields an inner product model with truncated Gaussian latent variables. + +Related work may be found in Bollobas et al. [2007] Section 16.4 and references therein. We also direct the interested reader to Priebe et al. [2011] in which the authors study other linear attribute fusion invariants; in particular, the authors consider + +$$maxd^W(G) = \max_v \sum_{c=1}^{K} w_c \sum_{u \in N[v]} I\{\phi(u, v) = c\}$$ + +and + +$$scan^W(G) = \max_v \sum_{c=1}^{K} w_c \sum_{u,x \in N[v]} I\{\phi(u,x) = c\},$$ + +where $N[v] = \{u | (u,v) \in \mathcal{E}\} \cup \{v\}$ is the closed neighborhood of vertex $v$ in the graph. + +Finally, we add that we will restrict ourselves to simple undirected graphs. 
We will not consider hyper-graphs (hyper-edges consisting of more than two vertices), multi-graphs (more than one edge between any two vertices), self-loops (an edge from a vertex to itself), or weighted edges. + +## 2 Notation + +For each positive integer $l$ we use the notation $[l] = \{1, \dots, l\}$. + +For each $v \in \mathcal{V}$ we assign a *latent position vector* $\mathbf{X}^v = (X_0^v, \dots, X_K^v) \sim \text{Dirichlet}(\lambda^v)$ for some fixed parameter vector $\lambda^v \in \mathbb{R}_+^{K+1}$. We also assume that the latent positions are independent. + +Our null hypothesis assumes a version of homogeneity among the vertices; specifically, + +$$\mathbb{H}_0: \mathbf{X}^v = (X_0^v, \dots, X_K^v) \sim \text{Dirichlet}(\lambda) \text{ for all } v \in \mathcal{V}$$ + +for some Dirichlet parameter vector $\lambda = (\lambda_0, \dots, \lambda_K)$. Our alternative hypothesis incorporates the anomaly feature described in the preceding section as follows. Assume $m = m(n) < n$ satisfies the following two conditions: $\lim_{n \to \infty} m(n) = \infty$ and $\lim_{n \to \infty} \frac{m(n)}{n} = 0$. Our alternative hypothesis is defined to be + +$$\mathbb{H}_1: \mathbf{X}^v = \begin{cases} (Y_0^v, \dots, Y_K^v) \stackrel{iid}{\sim} \text{Dir}(\eta) & v \in [m], \\ (X_0^v, \dots, X_K^v) \stackrel{iid}{\sim} \text{Dir}(\lambda) & v \in [n] - [m], \end{cases}$$ + +for some fixed Dirichlet parameter vector $\eta = (\eta_0, \dots, \eta_K)$ and the same $\lambda = (\lambda_0, \dots, \lambda_K)$ from the null hypothesis. For convenience, we also define $\Lambda = \sum_{0 \le c \le K} \lambda_c$ and $H = \sum_{0 \le c \le K} \eta_c$. + +We define + +$$\varepsilon_c = \sum_{(u,v) \in \binom{\mathcal{V}}{2}} \mathbb{I}\{\phi(u,v) = c\}$$ + +to be the *size* (i.e., number of 2-cliques) of attribute $c$ in the graph.
Similarly, for the number of *triangles* (i.e., 3-cliques) we write $\tau_c$, $\tau_{b,c}$, and $\tau_{b,c,d}$ to denote the number of 3-cliques with three $c$-colored edges, two $b$-colored and one $c$-colored edge, and one edge of each of three edge-colors $b, c, d$, respectively. \ No newline at end of file diff --git a/samples/texts/2058102/page_3.md b/samples/texts/2058102/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..46b755bf00ca6cb1b7b62a3cd53d069e563b0073 --- /dev/null +++ b/samples/texts/2058102/page_3.md @@ -0,0 +1,35 @@ +Before proceeding, we highlight a relevant property of the mixed moments of the Dirichlet distribution (see Johnson and Kotz [1972]): if $r_1, \dots, r_s$ are non-negative and $(X_1, \dots, X_s) \sim \text{Dirichlet}(\theta_1, \dots, \theta_s)$ then + +$$ E\left[\prod_{i=1}^{s} X_i^{r_i}\right] = \frac{\Gamma\left(\sum_{i=1}^{s} \theta_i\right) \prod_{j=1}^{s} \Gamma(\theta_j + r_j)}{\prod_{j=1}^{s} \Gamma(\theta_j)\, \Gamma\left(\sum_{j=1}^{s} (\theta_j + r_j)\right)} \quad (1) $$ + +where $\Gamma$ denotes Euler's standard Gamma function. With this property one can compute the exact moments of the Hajek projection of $T_2^W$ and $T_3^W$ under either hypothesis. We write $\nu_{(c)}^{(i)}$ to denote the $i$-th moment of $X_c$ in the null latent vector and $\nu_{(b,c)}^{(i,j)}$ to denote the joint $(i,j)$-moment of $(X_b, X_c)$. Similarly, we write $\mu_{(c)}^{(i)}$ to denote the $i$-th moment of $Y_c$ in the anomalous latent vector and $\mu_{(b,c)}^{(i,j)}$ to denote the joint $(i,j)$-moment of $(Y_b, Y_c)$. + +## 3 Analysis + +We will appeal to Hajek's Projection method, detailed in Nowicki and Wierman [1988], in order to demonstrate the asymptotic normality of the fusion invariants in this article. This approach is outlined as follows: We define the *projection* of the fusion $T$ to be the centered sum of independent random variables + +$$ T^* = \sum_{v \in \mathcal{V}} E[T | \mathbf{X}^v] - (n-1)E[T].
$$ + +For both the size and triangle fusion we aim to show that + +$$ \frac{T - E[T]}{\sqrt{\mathrm{Var}(T^*)}} = \frac{T - T^*}{\sqrt{\mathrm{Var}(T^*)}} + \frac{T^* - E[T]}{\sqrt{\mathrm{Var}(T^*)}} \xrightarrow{D} N(0, 1). $$ + +To this end, one appeals to Chebyshev's Inequality to show that $\mathrm{Var}(T - T^*) = o(\mathrm{Var}(T^*))$ (see Nowicki and Wierman [1988] for the detailed argument). Specifically, if + +$$ \mathbb{P}\left\{\frac{|T - T^*|}{\sqrt{\mathrm{Var}(T^*)}} \geq \epsilon\right\} \leq \frac{\mathrm{Var}(T - T^*)}{\epsilon^2 \mathrm{Var}(T^*)} \to 0 $$ + +for any positive $\epsilon$, then + +$$ \frac{T - T^*}{\sqrt{\mathrm{Var}(T^*)}} + \frac{T^* - E[T]}{\sqrt{\mathrm{Var}(T^*)}} \xrightarrow{D} N(0, 1) $$ + +by applying the Central Limit Theorem to the normalized sum of independent random variables in the second term of the left-hand side; since the first term vanishes in probability, the convergence of the sum then follows from Slutsky's theorem. + +### 3.1 The Attributed Size Fusion + +For each $c \in [K]$ define + +$$ \varepsilon_c = \sum_{(u,v) \in \binom{\mathcal{V}}{2}} \mathbb{I}\{\phi(u,v) = c\} $$ + +to be the number of edges of color $c$ in the graph. The linear attribute fusion with parameter $W = (w_1, \dots, w_K)$ is defined to be + +$$ T_2^W = \sum_{c=1}^{K} w_c \varepsilon_c. $$ \ No newline at end of file diff --git a/samples/texts/2058102/page_4.md b/samples/texts/2058102/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..63e104786c39fade10f2bfa6e7637ab0e8ec85f2 --- /dev/null +++ b/samples/texts/2058102/page_4.md @@ -0,0 +1,46 @@ +### 3.1.1 The Attributed Size Fusion under $H_0$ + +We present all relevant terms within the Hajek Projection of the attributed size fusion under the null hypothesis.
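These terms are built from Dirichlet moments, which follow directly from property (1). As a concrete numerical sketch (our illustration; helper names such as `dirichlet_moment` are ours, not the paper's), the first moments and the resulting null expectation of the size fusion can be evaluated with the standard library:

```python
import math

def dirichlet_moment(theta, r):
    # Mixed moment E[prod_j X_j^{r_j}] of (X_1, ..., X_s) ~ Dirichlet(theta),
    # evaluated via property (1) with math.gamma.
    num = math.gamma(sum(theta))
    den = math.gamma(sum(theta) + sum(r))
    for t_j, r_j in zip(theta, r):
        num *= math.gamma(t_j + r_j)
        den *= math.gamma(t_j)
    return num / den

def null_mean_size_fusion(n, lam, w):
    # E_0[T_2^W] = C(n, 2) * sum_c w_c [nu_c^(1)]^2, colors c = 1..K,
    # Dirichlet parameter lam = (lam_0, ..., lam_K), weights w = (w_1, ..., w_K).
    K = len(lam) - 1
    total = 0.0
    for c in range(1, K + 1):
        r = [0] * len(lam)
        r[c] = 1
        nu1 = dirichlet_moment(lam, r)  # nu_c^(1) = lam_c / Lambda
        total += w[c - 1] * nu1 ** 2
    return math.comb(n, 2) * total
```

For instance, under the uniform Dirichlet(1, 1) the first moment is 1/2, so with $n = 4$ and a single color weighted by 2 the null mean is $6 \cdot 2 \cdot 1/4 = 3$.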
+ +The expectation of $T_2^W$ under the null is given by + +$$E_0[T_2^W] = \binom{n}{2} \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}]^2.$$ + +$E_0[T_2^W | \mathbf{X}^a]$ for any fixed $a \in \mathcal{V}$ is given by + +$$E_0[T_2^W | \mathbf{X}^a] = (n-1) \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}] X_c^{(a)} + \binom{n-1}{2} \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}]^2.$$ + +We can now evaluate $T^*$ under the null: + +$$ +\begin{aligned} +T^* &= \sum_{a \in [n]} E[T_2^W | \mathbf{X}^a] - (n-1)E[T_2^W] \\ +&= (n-1) \sum_{a \in [n]} \sum_{1 \le c \le K} w_c [\nu_{(c)}^{(1)}] X_c^{(a)} - \binom{n}{2} \sum_{c=1}^{K} w_c [\nu_{(c)}^{(1)}]^2. +\end{aligned} +$$ + +The variance of this sum of independent and identically distributed random variables is + +$$Var_0(T^*) = \Theta(n^3).$$ + +As $Var_0(T_2^W - T^*) \le \binom{n}{2}(2+1)^2 E[\mathbb{I}\{\phi(u,v) > 0\}] = o(Var_0(T^*))$ by the Cauchy-Schwarz Inequality (see Nowicki and Wierman [1988] for full details), we have the desired convergence to the standard normal distribution. + +### 3.1.2 The Attributed Size Fusion under $H_1$ + +For the alternative we write the edge attribution function as + +$$\mathbb{P}\{\phi(u, v) = c\} = \begin{cases} Y_c^u Y_c^v & u, v \in [m], \\ Y_c^u X_c^v & u \in [m], v \in [n] - [m], \\ X_c^u X_c^v & u, v \in [n] - [m]. \end{cases}$$ + +We perform a similar but more involved analysis to deduce the limiting distribution of the attributed size fusion of the graph under these conditions, yielding + +$$Var_1(T^*) = \Theta(n^3)$$ + +and + +$$Var_1(T_2^W - T^*) = \Theta(n^2) = o(Var_1(T^*))$$ + +as desired. + +### 3.1.3 Asymptotic Power Analysis of the Attributed Size Fusion + +Returning to the context of hypothesis testing, assume we are interested in performing an $\alpha$-level hypothesis test to determine whether or not the graph includes an anomalous set of $m$ vertices whose underlying latent distribution differs from the null component of the graph. 
We define $\beta_2^W = \lim_{n \to \infty} \mathbb{P}_1\{T_2^W > c_\alpha\}$ where $c_\alpha = c(\alpha, n)$ is the $\alpha$-level critical value of the test. \ No newline at end of file diff --git a/samples/texts/2058102/page_5.md b/samples/texts/2058102/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..ec43b4674f185e8e37e51db40451678b8153a066 --- /dev/null +++ b/samples/texts/2058102/page_5.md @@ -0,0 +1,41 @@ +Fix $c \in [K]$. The difference between the corresponding terms in the means under the two hypotheses can be written as + +$$E_1[\varepsilon_c] - E_0[\varepsilon_c] = D_1^{(c)} + D_2^{(c)} = \binom{m}{1} \binom{n-m}{1} \nu_{(c)}^{(1)} (\mu_{(c)}^{(1)} - \nu_{(c)}^{(1)}) + \binom{m}{2} \left( [\mu_{(c)}^{(1)}]^2 - [\nu_{(c)}^{(1)}]^2 \right)$$ + +(here $D_i^{(c)}$ corresponds to the edge-count that includes edges with exactly $i$ anomalous vertices). The reader can verify that $\frac{Var_1(T^*)}{Var_0(T^*)} \to 1$. Moreover, given that the limiting distribution (under the null) is normal, we write + +$$c_{\alpha} = z_{\alpha} \sqrt{Var_0(T^*)} + E_0[T_2^W]$$ + +and thus + +$$\beta_2^W = \mathbb{P} \left\{ Z > z_\alpha - \lim_{n \to \infty} \left( \frac{E_1[T_2^W] - E_0[T_2^W]}{\sqrt{Var_0(T^*)}} \right) \right\}.$$ + +Recall that $Var_0(T^*) = \Theta(n^3)$; thus, if + +$$\sum_c w_c D_1^{(c)} \neq 0$$ + +(i.e. there is signal in the null-to-anomaly connectivity) then the limiting power $\beta_2^W > \alpha$ when $\frac{m(n-m)}{\sqrt{n^3}}$ is bounded away from zero or, equivalently, when $m = \Omega(\sqrt{n})$ (similarly, if $m = \omega(\sqrt{n})$ then $\beta_2^W \to 1$). Furthermore, if + +$$\sum_c w_c D_1^{(c)} = 0$$ + +(i.e. there is no signal in the null-to-anomaly connectivity) the limiting power $\beta_2^W > \alpha$ when $\sum_c w_c D_2^{(c)} \neq 0$ and $\frac{m^2}{\sqrt{n^3}}$ is bounded away from zero (which is equivalent to $m = \Omega(\sqrt[4]{n^3})$). Moreover, if $m = \omega(\sqrt[4]{n^3})$ under these conditions then $\beta_2^W \to 1$.
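Numerically, the limiting power is a simple function of the standardized mean shift; a minimal sketch (our helper, using `statistics.NormalDist` for the standard normal quantile and CDF):

```python
from statistics import NormalDist

_Z = NormalDist()  # standard normal

def limiting_power(shift, alpha=0.05):
    # beta = P{Z > z_alpha - shift}, where `shift` stands for the limit of
    # (E_1[T_2^W] - E_0[T_2^W]) / sqrt(Var_0(T*)).
    z_alpha = _Z.inv_cdf(1.0 - alpha)
    return 1.0 - _Z.cdf(z_alpha - shift)
```

A vanishing shift gives power exactly $\alpha$, a shift bounded away from zero gives power strictly above $\alpha$, and a diverging shift drives the power to 1, mirroring the $m = \Omega(\sqrt{n})$ versus $m = \omega(\sqrt{n})$ dichotomy.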
+ +It follows that the optimal choice of weights ($w_1, \dots, w_K$) is the one which maximizes the expression + +$$\lim_{n \to \infty} \left( \frac{E_1[T_2^W] - E_0[T_2^W]}{\sqrt{Var_0(T^*)}} \right)$$ + +in either of the two above-mentioned cases. + +## 3.2 The Attributed Number of Triangles Fusion + +We begin by writing + +$$\tau = \sum_{c \in [K]} \tau_c + \sum_{b \neq c} \tau_{b,c} + \sum_{d \neq b,c} \tau_{b,c,d}.$$ + +We define the number-of-triangles fusion invariant to be + +$$T_3^W = \sum_{c \in [K]} w_c \tau_c + \sum_{b \neq c} w_{b,c} \tau_{b,c} + \sum_{d \neq b,c} w_{b,c,d} \tau_{b,c,d}.$$ + +Similar to what was done in the previous section, we obtain + +$$E_0[T^*] = \binom{n}{3} \left[ \sum_{c \in [K]} w_c [\nu_{(c)}^{(2)}]^3 + \sum_{b \neq c} w_{b,c} 3\nu_{(b)}^{(2)} [\nu_{(b,c)}^{(1,1)}]^2 + \sum_{d \neq b,c} w_{b,c,d} 3\nu_{(b,c)}^{(1,1)} \nu_{(b,d)}^{(1,1)} \nu_{(c,d)}^{(1,1)} \right]$$ \ No newline at end of file diff --git a/samples/texts/2058102/page_6.md b/samples/texts/2058102/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..d7e0c629e3676f02997df73df3e8863aa72049b7 --- /dev/null +++ b/samples/texts/2058102/page_6.md @@ -0,0 +1,44 @@ +and + +$$Var_0(T^*) = \Theta\left(n\binom{n-1}{2}^2\right)$$ + +under the null and + +$$E_1[T^*] = \Theta\left(\sum_{i=0}^{3} \binom{m}{i} \binom{n-m}{3-i}\right)$$ + +and + +$$Var_1(T^*) = \Theta\left(n\binom{n-1}{2}^2\right)$$ + +under the alternative. Again, $Var_0(T_3^W - T^*) = o(Var_0(T^*))$ and $Var_1(T_3^W - T^*) = o(Var_1(T^*))$. + +As in the case with the attributed size fusion, we are interested in performing an $\alpha$-level hypothesis test.
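For concreteness, here is a small computational sketch (our hypothetical helper, not the paper's code) of evaluating the triangle fusion $T_3^W$ from an edge-coloring by classifying each 3-clique as monochromatic ($\tau_c$), two-and-one ($\tau_{b,c}$), or rainbow ($\tau_{b,c,d}$):

```python
from itertools import combinations

def triangle_fusion(n, phi, w_mono, w_two, w_three):
    # T_3^W for an edge-colored graph on vertices 0..n-1.
    # phi maps pairs (u, v) with u < v to a color in 1..K (absent pair = no edge);
    # w_mono[c], w_two[(b, c)] (b the repeated color), w_three[frozenset({b, c, d})]
    # play the roles of the weights w_c, w_{b,c}, w_{b,c,d}.
    total = 0.0
    for u, v, x in combinations(range(n), 3):
        cols = [phi.get(p) for p in ((u, v), (u, x), (v, x))]
        if None in cols:
            continue  # not a 3-clique
        distinct = set(cols)
        if len(distinct) == 1:
            total += w_mono[cols[0]]            # tau_c: three c-colored edges
        elif len(distinct) == 2:
            b = max(distinct, key=cols.count)   # the color appearing twice
            (c,) = distinct - {b}
            total += w_two[(b, c)]              # tau_{b,c}
        else:
            total += w_three[frozenset(distinct)]  # tau_{b,c,d}
    return total
```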
+ +The terms within the difference in means can be expressed as + +$$ +\begin{align*} +E_1[T^*] - E_0[T^*] &= \sum_{c \in [K]} D^{(c)} + \sum_{b \neq c} D^{(b,c)} + \sum_{d \neq b, c} D^{(b,c,d)} \\ +&= \Theta\left(\sum_{i=1}^{3} \binom{m}{i} \binom{n-m}{3-i} \delta_i(H, \Lambda)\right) +\end{align*} +$$ + +where $\delta_i(H, \Lambda)$ is the mixed-moments difference when there are $i$ anomalous vertices in a 3-clique. + +The reader can verify that $\frac{Var_1(T^*)}{Var_0(T^*)} \to 1$. Since $Var_0(T^*) = \Theta(n^5)$, we have that the limiting power $\beta_3^W > \alpha$ when, for some $i \in [3]$, $m = \Omega(\sqrt[2i]{n^{2i-1}})$ and the corresponding mixed-moments expression $\delta_i(H, \Lambda)$ is non-zero. + +# 4 Conclusion + +We have presented preliminary results for linear attribute fusion for clique sizes $q = 2$ and 3 in terms of inferential power when detecting the prescribed anomaly within our model. In general, the most powerful choice of $q$ depends on $m$ as a function of $n$ and on the Dirichlet parameter vectors $\lambda$ and $\eta$ through the mixed moments. + +## References + +B. Bollobas, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. *Random Structures and Algorithms*, 31:3–122, 2007. + +N. Johnson and S. Kotz. *Distributions in Statistics: Continuous Multivariate Distributions*. Wiley, New York, 1972. + +N. Lee and C. E. Priebe. A Latent Process Model for Time Series of Attributed Random Graphs. *Statistical Inference for Stochastic Processes*, forthcoming. + +K. Nowicki and J. Wierman. Subgraph counts in random graphs using incomplete $U$-statistics methods. *Discrete Mathematics*, 72:299–310, 1988. + +C. E. Priebe, N. Lee, Y. Park, and M. Tang. Attribute fusion in a latent process model for time series of graphs. *The 2011 IEEE Workshop on Statistical Signal Processing (SSP2011)*, 2011.
\ No newline at end of file diff --git a/samples/texts/2298363/page_1.md b/samples/texts/2298363/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..ef0ff0518923569fa27b8f35a965e50f9447a790 --- /dev/null +++ b/samples/texts/2298363/page_1.md @@ -0,0 +1,33 @@ +# Fairwashing Explanations with Off-Manifold Detergent + +Christopher J. Anders¹ Plamen Pasliev¹ Ann-Kathrin Dombrowski¹ Klaus-Robert Müller¹,²,³ Pan Kessel¹ + +## Abstract + +Explanation methods promise to make black-box classifiers more transparent. As a result, it is hoped that they can act as proof for a sensible, fair and trustworthy decision-making process of the algorithm and thereby increase its acceptance by the end-users. In this paper, we show both theoretically and experimentally that these hopes are presently unfounded. Specifically, we show that, for any classifier $g$, one can always construct another classifier $\tilde{g}$ which has the same behavior on the data (same train, validation, and test error) but has arbitrarily manipulated explanation maps. We derive this statement theoretically using differential geometry and demonstrate it experimentally for various explanation methods, architectures, and datasets. Motivated by our theoretical insights, we then propose a modification of existing explanation methods which makes them significantly more robust. + +## 1. Introduction + +Explanation methods⁴ are increasingly adopted by machine learning practitioners and incorporated into standard deep learning libraries (Kokhlikyan et al., 2019; Alber et al., 2019; Ancona et al., 2018). The interest in explainability is partly driven by the hope that explanations can act as proof for a sensible, fair, and trustworthy decision-making process (Aïvodji et al., 2019; Lapuschkin et al., 2019). As an example, a bank could provide explanations for its rejection of a loan application.
By doing so, the bank can demonstrate that the decision was not based on illegal or ethically questionable features. It can furthermore provide feedback to the customer. In some situations, an explanation of an algorithmic decision may even be required by law. + +However, this hope is based on the assumption that explanations faithfully reflect the underlying mechanisms of the algorithmic decision. In this work, we demonstrate unequivocally that this assumption should not be made carelessly because explanations can be easily manipulated. + +In more detail, we show theoretically that for any classifier $g$, one can always find another classifier $\tilde{g}$ which agrees with the original $g$ on the entire data manifold but has (almost) completely controlled explanations. This surprising result is established using techniques of differential geometry. We then demonstrate experimentally that one can easily construct such manipulated classifiers $\tilde{g}$. + +In the example above, a bank could use a manipulated classifier $\tilde{g}$ that uses mainly unethical features, such as the gender of the applicant, but has explanations which suggest that the decision was only based on financial features. + +Briefly put, the manipulability of explanations arises from the fact that the data manifold is typically low-dimensional compared to its high-dimensional embedding space. The training process only determines the classifier in directions along the manifold. However, many explanation methods are mainly sensitive to directions orthogonal to the data manifold. Since these directions are undetermined by training, they can be changed at will. + +This theoretical insight allows us to propose a modification to explanation methods which makes them significantly more robust with respect to such manipulations. Namely, the explanation is projected along tangential directions of the data manifold.
We show, both theoretically and experimentally, that these tangent-space-projected (tsp) explanations are indeed significantly more robust. We thereby establish a novel and exciting connection between the fields of explainability and manifold learning. + +In summary, our main contributions are as follows: + +¹Machine Learning Group, Technische Universität Berlin, Germany +²Max-Planck-Institut für Informatik, Saarbrücken, Germany +³Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea. Correspondence to: Pan Kessel , Klaus-Robert Müller . + +⁴See (Samek et al., 2019) and references therein for a detailed overview. + +* Using differential geometry, we establish theoretically that popular explanation methods can be easily manipulated. \ No newline at end of file diff --git a/samples/texts/2298363/page_10.md b/samples/texts/2298363/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..48a75cdc91b4cd1939d4b6913206cf7576b0fc0a --- /dev/null +++ b/samples/texts/2298363/page_10.md @@ -0,0 +1,15 @@ +Lee, J. M. *Introduction to Smooth Manifolds*. Springer, 2012. + +Montavon, G., Lapuschkin, S., Binder, A., Samek, W., and Müller, K.-R. Explaining nonlinear classification decisions with deep Taylor decomposition. *Pattern Recognition*, 65:211–222, 2017. + +Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K., and Müller, K.-R. *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*. Springer, 2019. ISBN 978-3-030-28953-9. doi: 10.1007/978-3-030-28954-6. + +Shao, H., Kumar, A., and Thomas Fletcher, P. The Riemannian geometry of deep generative models. In *Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops*, pp. 315–323, 2018. + +Shrikumar, A., Greenside, P., and Kundaje, A. Learning Important Features Through Propagating Activation Differences. In *Proceedings of the 34th International Conference on Machine Learning*, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp.
3145–3153, 2017. URL http://proceedings.mlr.press/v70/shrikumar17a.html. + +Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In *3rd International Conference on Learning Representations*, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1409.1556. + +Simonyan, K., Vedaldi, A., and Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. In *2nd International Conference on Learning Representations*, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Workshop Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6034. + +Sundararajan, M., Taly, A., and Yan, Q. Axiomatic Attribution for Deep Networks. In *Proceedings of the 34th International Conference on Machine Learning*, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 3319–3328, 2017. URL http://proceedings.mlr.press/v70/sundararajan17a.html. \ No newline at end of file diff --git a/samples/texts/2298363/page_2.md b/samples/texts/2298363/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..adcccc3d3223cab213af73af7fe63aa45a141772 --- /dev/null +++ b/samples/texts/2298363/page_2.md @@ -0,0 +1,45 @@ +* We validate our theoretical predictions in detailed experiments for various explanation methods, classifier architectures, and datasets, as well as for different tasks. + +* We propose a modification to existing explanation methods which makes them more robust with respect to these manipulations. + +* In doing so, we relate explainability to manifold learning. + +## 1.1. Related Works + +This work was crucially inspired by (Heo et al., 2019). In this reference, adversarial model manipulation for explanations is proposed.
Specifically, the authors empirically show that one can train models such that they have structurally different explanations while suffering only a very mild drop in classification accuracy compared to their unmanipulated counterparts. For example, the adversarial model manipulation can change the positions of the most relevant pixels in each image or increase the overall sum of relevances in a certain subregion of the images. Contrary to their work, we analyze this problem theoretically. Our analysis leads us to demonstrate a stronger form of manipulability. Namely, the model can be manipulated such that it structurally reproduces arbitrary target explanations while keeping all class probabilities the same for all data points. Our theoretical insights not only illuminate the underlying reasons for the manipulability but also allow us to develop modifications of existing explanation methods which make them more robust. Another approach (Kindermans et al., 2019) adds a constant shift to the input image, which is then eliminated by changing the bias of the first layer. For some methods, this leads to a change in the explanation map. Contrary to our approach, this requires a shift in the data. In (Adebayo et al., 2018), explanation maps are changed by randomization of (some of) the network weights. This is different to our method as it dramatically changes the output of the network and is proposed as a consistency check of explanations. In (Dombrowski et al., 2019) and (Ghorbani et al., 2019), it is shown that explanations can be manipulated by an infinitesimal change in input while the output of the network is approximately unchanged. Contrary to this approach, we manipulate the model and keep the input unchanged. + +## 1.2. Explanation Methods + +We consider a classifier $g: \mathbb{R}^D \to \mathbb{R}^K$ which classifies an input $x \in \mathbb{R}^D$ in $K$ categories with the predicted class given by $k = \arg\max_i g(x)_i$. 
The explanation method is denoted by $h_g: \mathbb{R}^D \to \mathbb{R}^D$ and associates an input $x$ with an explanation map $h_g(x)$ whose components encode the relevance score of each input for the classifier's prediction. + +We note that, by convention, explanation maps are usually calculated with respect to the classifier before applying the final softmax non-linearity (Kokhlikyan et al., 2019; Alber et al., 2019; Ancona et al., 2018). Throughout the paper, we will therefore denote this function as $g$. + +We use the following explanation methods: + +**Gradient:** The map $h_g(x) = \frac{\partial g}{\partial x}(x)$ is used and quantifies how infinitesimal perturbations in each pixel change the prediction $g(x)$ (Simonyan et al., 2014; Baehrens et al., 2010). + +**x ⊙ Grad:** This method uses the map $h_g(x) = x \odot \frac{\partial g}{\partial x}(x)$ (Shrikumar et al., 2017). For linear models, the exact contribution of each pixel to the prediction is obtained. + +**Integrated Gradients:** This method defines + +$$h_g(x) = (x - \bar{x}) \odot \int_0^1 \frac{\partial g(\bar{x} + t(x - \bar{x}))}{\partial x} dt$$ + +where $\bar{x}$ is a suitable baseline. We refer to the original reference (Sundararajan et al., 2017) for more details. + +**Layer-wise Relevance Propagation (LRP):** This method (Bach et al., 2015; Montavon et al., 2017) propagates relevance backwards through the network. In our experiments, we use the following setup: for the output layer, relevance is given by + +$$R_i^L = \delta_{i,k} = \begin{cases} 1, & \text{for } i=k \\ 0, & \text{for } i \neq k \end{cases},$$ + +which is then propagated backwards through all layers but the first using the $z^+$-rule + +$$R_i^l = \sum_j \frac{x_i^l(W^l)_{ji}^+}{\sum_i x_i^l(W^l)_{ji}^+ + \epsilon} R_j^{l+1}, \quad (1)$$ + +where $(W^l)^+$ denotes the positive weights of the $l$-th layer, $x^l$ is the activation vector of the $l$-th layer, and $\epsilon > 0$ is a small constant ensuring numerical stability. 
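To make the propagation concrete, here is a toy sketch of a single $z^+$-rule step on plain Python lists (our illustrative helper, not the paper's implementation; `W[j][i]` is assumed to connect lower unit $i$ to upper unit $j$):

```python
def zplus_step(x, W, R_upper, eps=1e-9):
    # One z+-rule backward step: redistribute relevance R_upper (one value per
    # upper-layer unit j) onto the lower-layer activations x (one per unit i).
    Wp = [[max(w, 0.0) for w in row] for row in W]  # keep positive weights only
    z = [sum(wj[i] * x[i] for i in range(len(x))) + eps for wj in Wp]
    return [x[i] * sum(Wp[j][i] * R_upper[j] / z[j] for j in range(len(R_upper)))
            for i in range(len(x))]
```

Up to the $\epsilon$ term, the step conserves relevance: the lower-layer scores sum to (approximately) the upper-layer relevance.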
For the first layer, we use the $z^B$-rule to account for the bounded input domain + +$$R_i^0 = \sum_j \frac{x_i^0 W_{ji}^0 - l_i(W^0)_{ji}^+ - h_i(W^0)_{ji}^-}{\sum_{i'} \left(x_{i'}^0 W_{ji'}^0 - l_{i'}(W^0)_{ji'}^+ - h_{i'}(W^0)_{ji'}^-\right)} R_j^1,$$ + +where $l_i$ and $h_i$ are the lower and upper bounds of the input domain respectively. + +For theoretical analysis, we consider the $\epsilon$-rule in all layers for simplicity. This rule is obtained by substituting $(W^l)^+ \to W^l$ in (1). We refer to the resulting method as $\epsilon$-LRP. + +This choice of methods is necessarily not exhaustive. However, it covers two classes of attribution methods, i.e. propagation and gradient-based explanations. Furthermore, the \ No newline at end of file diff --git a/samples/texts/2298363/page_3.md b/samples/texts/2298363/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..5dc341e90c9f0f6b4097c1aa4f2ba234f62605a9 --- /dev/null +++ b/samples/texts/2298363/page_3.md @@ -0,0 +1,51 @@ +chosen methods are widely used in practice (Kokhlikyan et al., 2019; Alber et al., 2019; Ancona et al., 2018). + +## 2. Manipulation of Explanations + +In this section, we will theoretically deduce that explanation methods can be arbitrarily manipulated by adversarially training a model. + +### 2.1. Mathematical Background + +In the following, we will briefly summarize the basic tools of differential geometry before applying them in the context of explainability in the next section. For additional technical details, we refer to Appendix A.1. + +A $D$-dimensional manifold $M$ is a topological space which locally resembles $\mathbb{R}^D$. More precisely, for each $p \in M$, there exists a subset $U \subset M$ containing $p$ and a diffeomorphism $\phi: U \to \tilde{U} \subset \mathbb{R}^D$. The pair $(U, \phi)$ is called a coordinate chart and the component functions $x^i$ of $\phi(p) = (x^1(p), \dots, x^D(p))$ are called coordinates.
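As a concrete instance of these definitions (our illustration, not taken from the paper), consider the unit circle $S^1 = \{p \in \mathbb{R}^2 : \|p\| = 1\}$ with the chart

$$U = S^1 \setminus \{(-1, 0)\}, \qquad \phi(p) = \operatorname{atan2}(p_2, p_1) \in (-\pi, \pi),$$

so the angle serves as a single coordinate on $U$. At $p = (\cos\theta, \sin\theta)$ the tangent space $T_p S^1$ is spanned by $(-\sin\theta, \cos\theta)$, while its orthogonal complement inside $T_p \mathbb{R}^2$ is spanned by $(\cos\theta, \sin\theta)$.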
+ +A $d$-dimensional submanifold $S$ is a subset of $M$ which is itself a $d$-dimensional manifold. $M$ is called the embedding manifold of $S$. A properly embedded submanifold $S \subset M$ is a submanifold embedded in $M$ which is also closed as a set. + +Let $p \in M$ be a point on a manifold $M$ and $\gamma: \mathbb{R} \to M$ with $\gamma(0) = p$ a curve through the point $p$. The set of tangent vectors $d\gamma = \frac{d}{dt}\gamma(t)|_{t=0}$ of all curves through $p$ forms a vector space of dimension $D$. This vector space is known as the tangent space $T_pM$. Let $(U, \phi)$ be a coordinate chart on $M$ with coordinates $x$. We can then define $\phi \circ \lambda_k(t) = (x^1(p), \dots, x^k(p) + t, \dots, x^D(p))$ with $k \in \{1, \dots, D\}$. This implicitly defines curves $\lambda_k: \mathbb{R} \to M$ through $p$. We denote the corresponding tangent vectors as $\partial_k := \frac{d}{dt}\lambda_k(t)|_{t=0}$ and it can be shown that they form a basis of the tangent space $T_pM$. + +A vector field $V$ on $M$ associates with every point $x \in M$ an element of the corresponding tangent space, i.e. $V(x) \in T_x M$.⁵ A conservative vector field $V$ is a vector field that is the gradient of a function $f: M \to \mathbb{R}$, i.e. $V(x) = \nabla f(x)$. For submanifolds $S$, there are two different notions of vector fields. A vector field $V$ on the submanifold $S$ associates to every point on $S$ a vector in its corresponding tangent space $T_x S$, i.e. $V(x) \in T_x S$. A vector field $V$ along the submanifold $S$ associates to every point on $S$ a vector in the corresponding tangent space of the embedding manifold $M$, i.e. $V(x) \in T_x M$. These concepts can be related as follows: the tangent space $T_x M$ can be decomposed into the tangent space $T_x S$ of $S$ and its orthogonal complement $T_x S^\perp$, i.e. $T_x M = T_x S \oplus T_x S^\perp$. A vector field along $S$ which only takes values in the first summand $T_x S$ is also a vector field on $S$. + +With these definitions, we can now state a crucial theorem for our theoretical analysis. In Appendix A.1, we show that: + +**Theorem 1** Let $S \subset M$ be a $d$-dimensional submanifold properly embedded in the $D$-dimensional manifold $M$. Let $V = \sum_{i=d+1}^{D} v^i \partial_i$ be a conservative vector field along $S$ which assigns a vector in $T_p S^\perp$ to each $p \in S$. For any smooth function $f: S \to \mathbb{R}$, there exists a smooth extension $F: M \to \mathbb{R}$ such that + +$$F|_S = f$$ + +where $F|_S$ denotes the restriction of $F$ to the submanifold $S$. Furthermore, the derivative of the extension $F$ is given by + +$$\nabla F(x) = (\nabla_1 f(x), \dots, \nabla_d f(x), v^{d+1}(x), \dots, v^D(x))$$ + +for all $x \in S$. + +Technical details notwithstanding, this theorem states that a function $f$ defined on a submanifold $S$ can be extended to the entire embedding manifold $M$. The extension's derivatives orthogonal to the submanifold $S$ can be freely chosen. + +This theorem is a generalization of the well-known submanifold extension lemma (see, for example, Lemma 5.34 in (Lee, 2012)) in that it not only shows that an extension exists but also that one has control over the gradient of the extension $F$. While we could not find such a statement in the literature, we suspect that it is entirely obvious to differential geometers but typically not needed for their purposes. + +### 2.2. Explanation Manipulation: Theory + +From Theorem 1, it follows under a mild assumption that one can always construct a model $\tilde{g}$ such that it closely reproduces arbitrary target explanations but has the same training, validation, and test loss as the original model $g$. + +**Assumption:** the data lies on a $d$-dimensional submanifold $S \subset M$ properly embedded in the manifold $M = \mathbb{R}^D$.
The data manifold $S$ is of much lower dimensionality than its embedding space $M$, i.e. + +$$\epsilon \equiv \frac{d}{D} \ll 1. \qquad (2)$$ + +We stress that this assumption is also known as the manifold conjecture and is expected to hold across a wide range of machine learning tasks. We refer to (Goodfellow et al., 2016) for a detailed discussion. + +Under this assumption, the following theorem can be derived for the Gradient, $x \odot \text{Grad}$, and $\epsilon$-LRP methods (only the proof for the Gradient method is given; see Appendix A.2 for other methods): + +⁵More rigorously, vector fields are defined in terms of the tangent bundle. We refrain from introducing bundles for accessibility. \ No newline at end of file diff --git a/samples/texts/2298363/page_4.md b/samples/texts/2298363/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..e1b5acceba6eedf7dffb738b1b087491bd8e0e42 --- /dev/null +++ b/samples/texts/2298363/page_4.md @@ -0,0 +1,69 @@ +**Theorem 2** Let $h_g: \mathbb{R}^D \to \mathbb{R}^D$ be the explanation of a classifier $g: \mathbb{R}^D \to \mathbb{R}$ with bounded derivatives $|\nabla_i g(x)| \le C \in \mathbb{R}_+$ for $i = 1, \dots, D$. + +For a given target explanation $h^t: \mathbb{R}^D \to \mathbb{R}^D$, there exists another classifier $\tilde{g}: \mathbb{R}^D \to \mathbb{R}$ which completely agrees with the classifier $g$ on the data manifold $S$, i.e. + +$$\tilde{g}|_S = g|_S. \quad (3)$$ + +In particular, both classifiers have the same train, validation, and test loss. + +However, its explanation $h_{\tilde{g}}$ closely resembles the target $h^t$, i.e. + +$$\text{MSE}(h_{\tilde{g}}(x), h^t(x)) \le \epsilon \quad \forall x \in S, \quad (4)$$ + +where $\text{MSE}(h, h') = \frac{1}{D} \sum_{i=1}^{D} (h_i - h'_i)^2$ denotes the mean-squared error and $\epsilon = \frac{d}{D}$.
+ +**Proof:** By Theorem 1, we can find a function $G$ which agrees with $g$ on the data manifold $S$ but has the derivative + +$$\nabla G(x) = (\nabla_1 g(x), \dots, \nabla_d g(x), h_{d+1}^t(x), \dots, h_D^t(x))$$ + +for all $x \in S$. By definition, this is its gradient explanation $h_G = \nabla G$. + +As explained in Appendix A.2.1, we can assume without loss of generality that $|\nabla_i g(x)| \le 0.5$ for $i \in \{1, \dots, D\}$. We can furthermore rescale the target map such that $|h_i^t| \le 0.5$ for $i \in \{1, \dots, D\}$. This rescaling is merely conventional as it does not change the relative importance $h_i$ of any input component $x_i$ with respect to the others. It then follows that + +$$\text{MSE}(h_G(x), h^t(x)) = \frac{1}{D} \sum_{i=1}^{D} (\nabla_i G(x) - h_i^t(x))^2 .$$ + +This sum can be decomposed as + +$$\frac{1}{D} \sum_{i=1}^{d} \underbrace{\left(\nabla_i g(x) - h_i^t(x)\right)^2}_{\le 1} + \frac{1}{D} \sum_{i=d+1}^{D} \underbrace{\left(\nabla_i G(x) - h_i^t(x)\right)^2}_{=0}$$ + +and from this, it follows that + +$$\text{MSE}(h_G(x), h^t(x)) \le \frac{d}{D} = \epsilon.$$ + +The proof then concludes by identifying $\tilde{g} = G$. □ + +**Intuition:** Somewhat roughly, this theorem can be understood as follows: two models which behave identically on the data need only agree on the low-dimensional submanifold $S$. The gradients "orthogonal" to the submanifold $S$ are completely undetermined by this requirement. By the manifold assumption, there are however many more "orthogonal" than "parallel" directions and therefore the explanation is largely controlled by these. We can use this fact to closely reproduce an arbitrary target while keeping the function's values on the data unchanged. + +We stress however that there are a number of non-trivial differential geometric arguments needed in order to make these statements rigorous and quantitative.
For example, it is entirely non-trivial that an extension to the embedding manifold exists for an arbitrary choice of target explanation. This is shown by Theorem 1, whose proof is based on a differential geometric technique called a partition of unity subordinate to an open cover. See Appendix A.1 for details. + +## 2.3. Explanation Manipulation: Methods + +**Flat Submanifolds and Logistic Regression:** The previous theorem assumes that the data lies on an arbitrarily curved submanifold and therefore has to rely on relatively involved mathematical concepts of differential geometry. We will now illustrate the basic ideas in a much simpler context: we will assume that the data lies on a $d$-dimensional flat hyperplane $S \subset \mathbb{R}^D$.⁶ The points on the hyperplane $S$ obey the relation + +$$\forall x \in S : (\hat{w}^{(i)})^T x = b_i, \quad i \in \{1, \dots, D-d\}, \quad (5)$$ + +where $\{\hat{w}^{(i)} \in \mathbb{R}^D | i = 1, \dots, D-d\}$ is a set of normal vectors to the hyperplane $S$ and $b_i \in \mathbb{R}$ are the affine translations. We furthermore assume that we use logistic regression as the classification algorithm, i.e. + +$$g(x) = \sigma(w^T x + c), \quad (6)$$ + +where $w \in \mathbb{R}^D$ and $c \in \mathbb{R}$ are the weights and the bias respectively and $\sigma(x) = \frac{1}{1+\exp(-x)}$ is the sigmoid function. This classifier has the gradient explanation⁷ + +$$h_{\text{grad}}(x) = w. \quad (7)$$ + +We can now define a modified classifier by + +$$\tilde{g}(x) = \sigma\left(w^T x + \sum_i \lambda_i \left( (\hat{w}^{(i)})^T x - b_i \right) + c\right), \quad (8)$$ + +for arbitrary $\lambda_i \in \mathbb{R}$. By (5), it follows that both classifiers agree on the data manifold $S$, i.e. + +$$\forall x \in S : g(x) = \tilde{g}(x), \quad (9)$$ + +and therefore have the same train, validation, and test error. However, the gradient explanation of $\tilde{g}$ is now given by + +$$h_{\text{grad}}(x) = w + \sum_{i} \lambda_{i} \hat{w}^{(i)}.
\quad (10)$$ + +⁶In mathematics, these submanifolds are usually referred to as $d$-flats and only the case $d = D - 1$ is called hyperplane. We refrain from this terminology. + +⁷We recall that in calculating the explanation map, we take the derivative before applying the final activation function. \ No newline at end of file diff --git a/samples/texts/2298363/page_5.md b/samples/texts/2298363/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..4d8d7cab83ce427b7690adddc0fef95049509c99 --- /dev/null +++ b/samples/texts/2298363/page_5.md @@ -0,0 +1,39 @@ +Since the $\lambda_i$ can be chosen freely, we can modify the explanations arbitrarily in directions orthogonal to the data submanifold $S$ (parameterized by the normal vectors $\hat{w}^{(i)}$). Similar statements can be shown for other explanation methods and we refer to the Appendix A.3 for more details. + +As we will discuss in Section 2.4, one can use these tricks even for data which does not (initially) lie on a hyperplane. + +**General Case:** For the case of arbitrary neural networks and curved data manifolds, we cannot analytically construct the manipulated model $\tilde{g}$. We therefore approximately obtain the model $\tilde{g}$ corresponding to the original model $g$ by minimizing the loss + +$$ \mathcal{L} = \sum_{x_i \in T} ||g(x_i) - \tilde{g}(x_i)||^2 + \gamma \sum_{x_i \in T} ||h_{\tilde{g}}(x_i) - h^t||^2, \quad (11) $$ + +by stochastic gradient descent with respect to the parameters of $\tilde{g}$. The training set is denoted by $\mathcal{T}$ and $h^t \in \mathbb{R}^D$ is a specified target explanation. Note that we could also use different targets for various subsets of the data but we will not make this explicit to avoid cluttered notation. The first term in the loss $\mathcal{L}$ ensures that the models $g$ and $\tilde{g}$ have approximately the same output while the second term encourages the explanations of $\tilde{g}$ to closely reproduce the target $h^t$. 
The relative weighting of these two terms is determined by the hyperparameter $\gamma \in \mathbb{R}_+$. + +As we will demonstrate experimentally, the resulting $\tilde{g}$ will closely reproduce the target explanation $h^t$ and have (approximately) the same output as $g$. Crucially, both statements will be seen to hold also for the test set. + +## 2.4. Explanation Manipulation: Practice + +In this section, we will demonstrate manipulation of explanations experimentally. We will first discuss applying logistic regression to credit assessment and then proceed to the case of deep neural networks in the context of image classification. The code for all our experiments is publicly available at https://github.com/fairwashing/fairwashing. + +**Credit Assessment:** In the following, we will suppose that a bank uses a logistic regression algorithm to classify whether a prospective client should receive a loan or not. The classification uses the features $x = (x_{\text{gender}}, x_{\text{income}})$ where + +$$ x_{\text{gender}} = \begin{cases} 1, & \text{for male} \\ -1, & \text{for female} \end{cases} \quad (12) $$ + +and $x_{\text{income}}$ is the income of the applicant. Normalization is chosen such that the features are of the same order of magnitude. Details can be found in Appendix B. + +Figure 1. x⊙Grad explanations for **original classifier g** and **manipulated $\tilde{g}$** highlight completely different features. Colored bars show the median of the explanations over multiple examples. + +We then define a logistic regression classifier $g$ by choosing the weights $w = (0.9, 0.1)$, i.e. female applicants are severely discriminated against. The discriminating nature of the algorithm may be detected by inspecting, for example, the gradient explanation map $h_{g}^{\text{grad}} = w$.
+ +Conversely, if the explanations did not show any sign of discrimination for another classifier $\tilde{g}$, the user may interpret this as a sign of its trustworthiness and fairness. + +However, the bank can easily "fairwash" the explanations, i.e. hide the fact that the classifier is sexist. This can be done by adding new features which are linearly dependent on the previously used features. As a simple example, one could add the applicant's paid taxes $x_{\text{taxes}}$ as a feature. By definition, it holds that + +$$ x_{\text{taxes}} = 0.4 x_{\text{income}}, \quad (13) $$ + +where we assume that there is a fixed tax rate of 0.4 on all income. The features used by the classifier are now $x = (x_{\text{gender}}, x_{\text{income}}, x_{\text{taxes}})$. By (13), all data samples $x$ obey + +$$ \hat{w}^T x = 0 \quad \text{with} \quad \hat{w} = (0, 0.4, -1). \quad (14) $$ + +Therefore, the original classifier $g(x) = \sigma(w^T x)$ with $w = (0.9, 0.1, 0)$ leads to the same output as the classifier $\tilde{g}(x) = \sigma(w^T x + 1000 \hat{w}^T x)$. However, as shown in Figure 1, the classifier $\tilde{g}$ has explanations which suggest that the two financial features (and *not* the applicant's gender) are important for the classification result. + +This example is merely an (oversimplified) illustration of a general concept: for each additional feature which linearly depends on the previously used features, a condition of the form (14) for some normal vector $\hat{w}$ is obtained. We can then construct a classifier with arbitrary explanation along each of these normal vectors. 
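The tax-rate construction above is small enough to check numerically. The following sketch (ours, not from the paper; the synthetic incomes are made up, while the weights $w$, the normal vector $\hat{w}$, and the factor 1000 are taken from the text) confirms that $g$ and $\tilde{g}$ coincide on every sample satisfying (13), while their gradient explanations differ drastically:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Original classifier g(x) = sigma(w^T x): gender carries almost all the weight.
w = np.array([0.9, 0.1, 0.0])
# Normal vector of the data manifold: x_taxes = 0.4 * x_income  =>  w_hat^T x = 0.
w_hat = np.array([0.0, 0.4, -1.0])
lam = 1000.0

def g(x):
    return sigmoid(x @ w)

def g_tilde(x):
    return sigmoid(x @ (w + lam * w_hat))

# Synthetic applicants (gender, income, taxes) consistent with the fixed tax rate.
income = np.array([0.2, 0.5, 1.3, 0.8])
gender = np.array([1.0, -1.0, -1.0, 1.0])
X = np.stack([gender, income, 0.4 * income], axis=1)

# On-manifold: identical outputs; the (pre-sigmoid) gradient explanations differ.
outputs_match = np.allclose(g(X), g_tilde(X))
h_g = w                      # gradient explanation of g
h_g_tilde = w + lam * w_hat  # gradient explanation of g_tilde
```

Off-manifold inputs, i.e. taxes inconsistent with income, would expose the manipulation, since there $\hat{w}^T x \neq 0$.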
\ No newline at end of file diff --git a/samples/texts/2298363/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..bddc505a9cba6d0a6d01f7ccdb733841c74cb9e5 --- /dev/null +++ b/samples/texts/2298363/page_6.md @@ -0,0 +1,29 @@ +**Image Classification:** We will now experimentally demonstrate the practical applicability of our methods in the context of image classification with deep neural networks. + +**Datasets:** We consider the MNIST, FashionMNIST, and CIFAR10 datasets. We use the standard training and test sets for our analysis. The data is normalized such that it has mean zero and standard deviation one. We sum the absolute values of the explanation over its channels to get the relevance per pixel. The resulting relevances are then normalized to have a sum of one. + +**Models:** For CIFAR10, we use the VGG16 (Simonyan & Zisserman, 2015) architecture. For FashionMNIST and MNIST, we use a four-layer convolutional neural network. We train the model $g$ by minimizing the standard cross-entropy loss for classification. The manipulated model $\tilde{g}$ is then trained by minimizing the loss (11) for a given target explanation $h^t$. This target was chosen to have the shape of the number 42. For more details about the architectures and training, we refer to Appendix D. + +**Quantitative Measures:** We assess the similarity between explanation maps using three quantitative measures: the structural similarity index (SSIM), the Pearson correlation coefficient (PCC), and the mean squared error (MSE). SSIM and PCC are relative similarity measures with values in [0, 1], where larger values indicate high similarity. The MSE is an absolute error measure for which values close to zero indicate high similarity. We also use the MSE metric as well as the Kullback-Leibler divergence for assessing the similarity of the class scores of the manipulated model $\tilde{g}$ and the original network $g$.
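Two of these measures can be written down in a few lines; the sketch below is a minimal NumPy illustration (ours, not the paper's evaluation code; SSIM is omitted since it requires a windowed computation, e.g. scikit-image's `structural_similarity`):

```python
import numpy as np

def mse(h, h_prime):
    """Mean squared error between two (flattened) explanation maps."""
    h, h_prime = np.ravel(h), np.ravel(h_prime)
    return np.mean((h - h_prime) ** 2)

def pcc(h, h_prime):
    """Pearson correlation coefficient between two explanation maps."""
    h, h_prime = np.ravel(h), np.ravel(h_prime)
    return np.corrcoef(h, h_prime)[0, 1]

rng = np.random.default_rng(0)
target = rng.random((28, 28))        # stand-in for a target explanation h^t
noisy = target + 0.01 * rng.standard_normal((28, 28))

low_error = mse(target, noisy)       # close to zero -> high similarity
high_corr = pcc(target, noisy)       # close to one  -> high similarity
```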
+ +**Results:** For all considered models, datasets, and explanation methods, we find that the manipulated model $\tilde{g}$ has explanations which closely resemble the target map $h^t$, e.g. the MSE between the target and manipulated explanations is of the order $10^{-3}$. At the same time, the manipulated network $\tilde{g}$ has approximately the same output as the original model $g$, i.e. the mean-squared error of the outputs after the final softmax non-linearity is of the order $10^{-3}$. The classification accuracy is changed by about 0.2 percent. + +Figure 2 illustrates this for examples from the FashionMNIST and CIFAR10 test sets. We stress that we use a single model for the Gradient, x⊙Grad, and Integrated Gradients methods, which demonstrates that the manipulation generalizes over all considered gradient-based methods. + +The left-hand side of Figure 3 shows quantitatively that the manipulated model $\tilde{g}$ closely reproduces the target map $h^t$ over the entire test set of FashionMNIST. We refer to Appendix D for additional similarity measures, examples, and quantitative analysis for all datasets. + +Figure 2. Example explanations from the original model $g$ (left) and the manipulated model $\tilde{g}$ (right). Images from the test sets of FashionMNIST (top) and CIFAR10 (bottom). + +# 3. Robust Explanations + +Having demonstrated both theoretically and experimentally that explanations are highly vulnerable to model manipulation, we will now use our theoretical insights to propose explanation methods which are significantly more robust under such manipulations. + +## 3.1. TSP Explanations: Theory + +In this section, we will define a robust *gradient explanation* method. Appendix C discusses analogous definitions for other methods. + +We can formally define an explanation field $H_g$ which associates to every point $x$ on the data manifold $S$ the corresponding gradient explanation $h_g(x)$ of the classifier $g$.
We note that $H_g$ is generically a vector field along the manifold since $h_g(x) \in \mathbb{R}^D \cong T_x M$, i.e. it is an element of the tangent space $T_x M$ of the embedding manifold $M$ and not an element of the tangent space $T_x S$ of the data manifold $S$. + +As explained in Section 2.1, we can decompose the tangent space $T_x M$ of the embedding manifold $M$ as follows +$$T_x M = T_x S \oplus T_x S^\perp.$$ +Let $P : T_x M \to T_x S$ be the projection onto the first summand of this decomposition. We stress that the form of the projector $P$ depends on the point $x \in S$ but we do not make this explicit in order to simplify notation. We can then define: \ No newline at end of file diff --git a/samples/texts/2298363/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..cf79ceefc6b8a2e7a1ac9bf419b3d4ee2c856543 --- /dev/null +++ b/samples/texts/2298363/page_7.md @@ -0,0 +1,54 @@ +**Definition 1** *The tangent-space-projected (tsp) explanation field $\hat{H}_g$ is a vector field on the data manifold $S$. It associates to each $x \in S$ the tangent-space-projected (tsp) explanation $\hat{h}_g(x)$ given by* + +$$ \hat{h}_g(x) = (P \circ h_g)(x) \in T_x S. \quad (15) $$ + +Intuitively, the tsp-explanation $\hat{h}_g(x)$ is the explanation of the model $g$ projected on the "tangential directions" of the data manifold. + +We recall from our discussion of Theorem 2 that we can always find classifiers $\tilde{g}$ which coincide with the original classifier $g$ on the data manifold $S$ but may differ in the gradient components orthogonal to the data manifold, i.e. for some $x \in S$ it holds that + +$$ (1-P)\nabla g(x) \neq (1-P)\nabla \tilde{g}(x). $$ + +On the other hand, the components tangential to the manifold $S$ agree + +$$ P \nabla g(x) = P \nabla \tilde{g}(x), \quad \forall x \in S.
$$ + +In other words, the tsp-gradient explanations of the original model *g* and any such model $\tilde{g}$ are identical: + +$$ \hat{h}_g(x) = \hat{h}_{\tilde{g}}(x) \quad \forall x \in S. \quad (16) $$ + +It can therefore be expected that tsp-explanations $\hat{h}_g$ are significantly more robust compared to their unprojected counterparts $h_g$. + +For other explanation methods, the corresponding tsp-explanations may be obtained using a slightly modified projector *P*. We refer to Appendix C for more details. + +**3.2. TSP Explanations: Methods** + +**Flat Submanifolds and Logistic Regression:** Recall from Section 2.3 that for a logistic regression model $g(x) = \sigma(w^T x + c)$ with gradient explanation $h_{g}^{\text{grad}} = w$, we can define a manipulated model + +$$ \tilde{g}(x) = \sigma \left( w^T x + \sum_i \lambda_i \left( (\hat{w}^{(i)})^T x - b_i \right) + c \right) $$ + +with gradient explanation $h_{\tilde{g}}^{\text{grad}} = w + \sum_i \lambda_i \hat{w}^{(i)}$ for arbitrary $\lambda_i \in \mathbb{R}$. Since the vectors $\hat{w}^{(i)}$ are normal to the data hyperplane $S$, it holds that $P\hat{w}^{(i)} = 0$. As a result, the gradient tsp-explanations of the original model $g$ and its manipulated counterpart $\tilde{g}$ are identical, i.e. + +$$ \hat{h}_{g}^{\text{grad}} = \hat{h}_{\tilde{g}}^{\text{grad}} = Pw. \quad (17) $$ + +We discuss the case of other explanation methods in Appendix C.1. + +Figure 3. Left: SSIM of the target map $h^t$ and explanations of **original model** *g* and **manipulated** $\tilde{g}$ respectively. Clearly, the manipulated model $\tilde{g}$ has explanations which closely resemble the target map $h^t$ over the entire FashionMNIST test set. Right: Same as on the left but for **tsp-explanations**. The model $\tilde{g}$ was trained to manipulate the tsp-explanation. Evidently, tsp-explanations are considerably more robust than their unprojected counterparts on the left. Colored bars show the median.
Errors denote the 25th and 75th percentile. Other similarity measures show similar behaviour and can be found in Appendix D. + +Figure 4. x⊙Grad tsp-explanations for **original classifier g** and **manipulated $\tilde{g}$** highlight the same features. Colored bars show the median of the explanations over multiple examples. + +**General Case:** In many practical applications, we do not know the explicit form of the projection matrix *P*. In these situations, we propose to construct *P* by one of the following two methods: + +**Hyperplane method:** for a given datapoint $x \in S$, we find its $k$ nearest neighbours $x_1, \dots, x_k$ in the training set. We then estimate the data tangent space $T_x S$ by constructing the $d$-dimensional hyperplane with minimal Euclidean distance to the points $x, x_1, \dots, x_k$. Let this hyperplane be spanned by an orthonormal basis $q_1, \dots, q_d \in \mathbb{R}^D$. The projection \ No newline at end of file diff --git a/samples/texts/2298363/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..e7166db03ba2902e55d0081243c21c4361c284b4 --- /dev/null +++ b/samples/texts/2298363/page_8.md @@ -0,0 +1,35 @@ +matrix $P$ on this hyperplane is then given by + +$$P = \sum_{i=1}^{d} q_i q_i^T .$$ + +**Autoencoder method:** the hyperplane method requires that the data manifold is sufficiently densely sampled, i.e. the nearest neighbors are small deformations of the data point itself. In order to estimate the tangent space for datasets without this property, we use techniques from the well-established field of manifold learning. Following (Shao et al., 2018), we train an autoencoder on the dataset and then perform a singular value decomposition (SVD) of the Jacobian of the decoder $D$, + +$$\frac{\partial D}{\partial z} = U \Sigma V^T. \quad (18)$$ + +The projector is constructed from the left-singular vectors $u_1, \dots, u_d \in \mathbb{R}^D$ corresponding to the $d$ largest singular values.
The projector is obtained by + +$$P = \sum_{i=1}^{d} u_i u_i^T . \quad (19)$$ + +The underlying motivation for this procedure is reviewed in Appendix C.2. + +After one of these methods is used to estimate the projector $P$ for a given $x \in S$, the corresponding tsp-explanation can easily be computed as $\hat{h}(x) = Ph(x)$. + +### 3.3. TSP Explanations: Practice + +In this section, we will apply tsp-explanations to the examples of Section 2.4 and show that they are significantly more robust under model manipulations. + +**Credit Assessment:** From the arguments of the previous section, it follows that the tsp-explanations of the manipulated and original model agree. We indeed confirm this experimentally, see Figure 4. We refer to Appendix B for more details. + +**Image Classification:** For MNIST and FashionMNIST, we use the hyperplane method to estimate the tangent space. For CIFAR10, we find that the manifold is not densely sampled enough and we therefore use the autoencoder method. This is computationally expensive and takes about 48h using four Tesla P100 GPUs. We refer to Appendix D for more details. + +Figure 5 shows the tsp-explanations for the examples of Figure 2. The explanation maps of the original and manipulated model show a high degree of visual similarity. This suggests that the manipulation occurred mainly in directions orthogonal to the data manifold (as the tsp-explanations are obtained from the original explanations by projecting out the corresponding components). This is also confirmed quantitatively, see Appendix D. Furthermore, tsp-explanations tend to be considerably less noisy than their unprojected counterparts (see Figure 5 vs 2). This is expected from our theoretical analysis: consider gradient explanations for concreteness. Their components orthogonal to the data manifold are undetermined by training and are therefore essentially chosen at random.
This fitting noise is projected out in the tsp-explanation, which results in a less noisy explanation. + +If the adversaries knew that tsp-explanations are used, they could also try to train a model $\tilde{g}$ which manipulates the tsp-explanations directly. However, tsp-explanations are considerably more robust to such manipulations, as shown on the right-hand side of Figure 3. + +We refer to Appendix D for a more detailed discussion. + +## 4. Conclusion + +A central message of this work is that widely-used explanation methods should not be used as proof of a fair and sensible algorithmic decision-making process. This is because they can be easily manipulated, as we have demonstrated both theoretically and experimentally. We propose modifications to existing explanation methods which make \ No newline at end of file diff --git a/samples/texts/2298363/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..c1c6b148935ad880c51128f08fdb96600d6a83df --- /dev/null +++ b/samples/texts/2298363/page_9.md @@ -0,0 +1,35 @@ +them more robust with respect to such manipulations. This is achieved by projecting explanations onto the tangent space of the data manifold. This is exciting because it connects explainability to the field of manifold learning. For applying these methods, it is however necessary to estimate the tangent space of the data manifold. For high-dimensional datasets, such as ImageNet, this is an expensive and challenging task. Future work will try to overcome this hurdle. Another promising direction for further research is to apply the methods developed in this work to other application domains such as natural language processing. + +## Acknowledgements + +We thank the reviewers for their valuable feedback. P.K. is greatly indebted to his mother-in-law as she took care of his sick son and wife during the final week before submission. We acknowledge Shinichi Nakajima for stimulating discussion. K-R.M.
was supported in part by the German Ministry for Education and Research (BMBF) under Grants 01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18025A and 01IS18037A. This work is also supported by the Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (No. 2017-001779), as well as by the Research Training Group "Differential Equation- and Data-driven Models in Life Sciences and Fluid Dynamics (DAEDALUS)" (GRK 2433) and Grant Math+, EXC 2046/1, Project ID 390685689, both funded by the German Research Foundation (DFG). + +## References + +Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I. J., Hardt, M., and Kim, B. Sanity checks for saliency maps. In *Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018*, Montréal, Canada, pp. 9525–9536, 2018. + +Aïvodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., and Tapp, A. Fairwashing: the risk of rationalization. In Chaudhuri, K. and Salakhutdinov, R. (eds.), *Proceedings of the 36th International Conference on Machine Learning*, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of *Proceedings of Machine Learning Research*, pp. 161–170. PMLR, 2019. URL http://proceedings.mlr.press/v97/aivodji19a.html. + +Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K. T., Montavon, G., Samek, W., Müller, K.-R., Dähne, S., and Kindermans, P. iNNvestigate neural networks! *Journal of Machine Learning Research*, 20, 2019. + +Ancona, M., Ceolini, E., Oztireli, C., and Gross, M. Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In *6th International Conference on Learning Representations (ICLR 2018)*, 2018. + +Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. *PLOS ONE*, 10(7):1–46, 07 2015.
doi: 10.1371/journal.pone.0130140. URL https://doi.org/10.1371/journal.pone.0130140. + +Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., and Müller, K.-R. How to explain individual classification decisions. *Journal of Machine Learning Research*, 11(Jun):1803–1831, 2010. + +Dombrowski, A.-K., Alber, M., Anders, C., Ackermann, M., Müller, K.-R., and Kessel, P. Explanations can be manipulated and geometry is to blame. In *Advances in Neural Information Processing Systems*, pp. 13567–13578, 2019. + +Ghorbani, A., Abid, A., and Zou, J. Y. Interpretation of neural networks is fragile. In *The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019*, Honolulu, Hawaii, USA, January 27 - February 1, 2019., pp. 3681–3688, 2019. + +Goodfellow, I., Bengio, Y., and Courville, A. *Deep Learning*. MIT Press, 2016. http://www.deeplearningbook.org. + +Heo, J., Joo, S., and Moon, T. Fooling neural network interpretations via adversarial model manipulation. In *Advances in Neural Information Processing Systems*, pp. 2921–2932, 2019. + +Kindermans, P., Hooker, S., Adebayo, J., Alber, M., Schütt, K. T., Dähne, S., Erhan, D., and Kim, B. The (un)reliability of saliency methods. In *Explainable AI: Interpreting, Explaining and Visualizing Deep Learning*, pp. 267–280. Springer, 2019. + +Kokhlikyan, N., Miglani, V., Martin, M., Wang, E., Reynolds, J., Melnikov, A., Lunova, N., and Reblitz-Richardson, O. Pytorch captum. https://github.com/pytorch/captum, 2019. + +Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., and Müller, K.-R. Unmasking clever hans predictors and assessing what machines really learn. *Nature communications*, 10:1096, 2019. 
\ No newline at end of file diff --git a/samples/texts/348597/page_1.md b/samples/texts/348597/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..90a8c852870425accd585da2a200077cf5bbcb11 --- /dev/null +++ b/samples/texts/348597/page_1.md @@ -0,0 +1,8 @@ +ELEMENTARY +MATHEMATICAL and +COMPUTATIONAL TOOLS +for ELECTRICAL and +COMPUTER ENGINEERS +USING MATLAB® + +Jamal T. Manassah \ No newline at end of file diff --git a/samples/texts/348597/page_10.md b/samples/texts/348597/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..0250ab9446efe84d7cb3e905c4fda25a4754914e --- /dev/null +++ b/samples/texts/348597/page_10.md @@ -0,0 +1,75 @@ +2.8 Fractals and Computer Art + +2.8.1 Mira's Model + +2.8.2 Hénon's Model + +2.9 Generation of Special Functions from Their Recursion Relations* + +3. Elementary Functions and Some of Their Uses + +3.1 Function Files + +3.2 Examples with Affine Functions + +3.3 Examples with Quadratic Functions + +3.4 Examples with Polynomial Functions + +3.5 Examples with Trigonometric Functions + +3.6 Examples with the Logarithmic Function + +3.6.1 Ideal Coaxial Capacitor + +3.6.2 The Decibel Scale + +3.6.3 Entropy + +3.7 Examples with the Exponential Function + +3.8 Examples with the Hyperbolic Functions and Their Inverses + +3.8.1 Capacitance of Two Parallel Wires + +3.9 Commonly Used Signal Processing Functions + +3.10 Animation of a Moving Rectangular Pulse + +3.11 MATLAB Commands Review + +4. 
Numerical Differentiation, Integration, and Solutions of Ordinary Differential Equations + +4.1 Limits of Indeterminate Forms + +4.2 Derivative of a Function + +4.3 Infinite Sums + +4.4 Numerical Integration + +4.5 A Better Numerical Differentiator + +4.6 A Better Numerical Integrator: Simpson's Rule + +4.7 Numerical Solutions of Ordinary Differential Equations + +4.7.1 First-Order Iterator + +4.7.2 Higher-Order Iterators: The Runge-Kutta Method* + +4.7.3 MATLAB ODE Solvers + +4.8 MATLAB Commands Review + +5. Root Solving and Optimization Methods + +5.1 Finding the Real Roots of a Function + +5.1.1 Graphical Method + +5.1.2 Numerical Methods + +5.1.3 MATLAB fsolve and fzero Built-in Functions + +5.2 Roots of a Polynomial \ No newline at end of file diff --git a/samples/texts/348597/page_100.md b/samples/texts/348597/page_100.md new file mode 100644 index 0000000000000000000000000000000000000000..e06f2877c5816a23eaad8a360fc3cf64dda1ad91 --- /dev/null +++ b/samples/texts/348597/page_100.md @@ -0,0 +1,31 @@ +still much smaller than the value of the variation of x over which the function +changes appreciably. + +For a systematic method to choose an upper limit on dx, you might want +to follow these simple steps: + +1. Plot the function on the given interval and identify the point where the derivative is largest. + +2. Compute the derivative at that point using the sequence method of Example 4.2, and determine the $\bar{d}x$ that would satisfy the desired tolerance; then go ahead and use this value of $\bar{d}x$ in the above routine to evaluate the derivative throughout the given interval. + +*In-Class Exercises* + +Plot the derivatives of the following functions on the indicated intervals: + +Pb. 4.11 $\ln\left|\frac{x-1}{x+1}\right|$ on $2 < x < 3$ + +Pb. 4.12 $\ln\left|\frac{1+\sqrt{1+x^2}}{x}\right|$ on $1 < x < 2$ + +Pb. 4.13 $\ln|\tanh(x/2)|$ on $1 < x < 5$ + +Pb. 4.14 $\tan^{-1}|\sinh(x)|$ on $0 < x < 10$ + +Pb. 
4.15 $\ln|\csc(x) + \tan(x)|$ on $0 < x < \pi/2$ + +**4.3 Infinite Sums** + +An infinite series is denoted by the symbol $\sum_{n=1}^{\infty} a_n$. It is important not to confuse the series with the sequence $\{a_n\}$. The sequence is a list of terms, while the series is a sum of these terms. A sequence is convergent if the term $a_n$ approaches a finite limit; however, convergence of a series requires that the sequence of partial sums $S_N = \sum_{n=1}^{N} a_n$ approaches a finite limit. There are
+ +* The Ratio Test, which is very useful for series with terms that contain factorials and/or $n^{th}$ power of a constant, states that: + +$$ \text{for } a_n > 0, \text{ the series } \sum_{n=1}^{\infty} a_n \text{ is convergent if } \lim_{n \to \infty} \left( \frac{a_{n+1}}{a_n} \right) < 1 $$ + +* The Root Test stipulates that for $a_n > 0$, the series $\sum_{n=1}^{\infty} a_n$ is convergent if + +$$ \lim_{n \to \infty} (a_n)^{1/n} < 1 $$ + +* For an alternating series, the series is convergent if it satisfies the conditions that + +$$ \lim_{n \to \infty} |a_n| = 0 \quad \text{and} \quad |a_{n+1}| < |a_n| $$ + +Now look at the numerical routines for evaluating the limit of the partial sums when they exist. + +**Example 4.4** + +Compute the sum of the geometrical series $S_N = \sum_{n=1}^{N} \left(\frac{1}{2}\right)^n$. + +**Solution:** Edit and execute the following script M-file: + +``` +for N=1:20 +n=N:-1:1; +fn=(1/2).^n; +Sn(N)=sum(fn); +end +NN=1:20; +plot(NN,Sn) +``` \ No newline at end of file diff --git a/samples/texts/348597/page_102.md b/samples/texts/348597/page_102.md new file mode 100644 index 0000000000000000000000000000000000000000..7881e7a370fa664a62bb40d788c3e294f0b89158 --- /dev/null +++ b/samples/texts/348597/page_102.md @@ -0,0 +1,26 @@ +You will observe that this partial sum converges to 1. + +NOTE The above summation was performed backwards because this scheme will ensure a more accurate result and will keep all the significant digits of the smallest term of the sum. + +## In-Class Exercises + +Compute the following infinite sums: + +Pb. 4.16 $\sum_{k=1}^{\infty} \frac{1}{(2k-1)2^{2k-1}}$ + +Pb. 4.17 $\sum_{k=1}^{\infty} \frac{\sin(2k-1)}{(2k-1)}$ + +Pb. 4.18 $\sum_{k=1}^{\infty} \frac{\cos(k)}{k^4}$ + +Pb. 4.19 $\sum_{k=1}^{\infty} \frac{\sin(k/2)}{k^3}$ + +Pb. 
4.20 $\sum_{k=1}^{\infty} \frac{1}{2^k} \sin(k)$
+
+## 4.4 Numerical Integration
+
+The algorithm for integration discussed in this section is the second simplest available (the trapezoid rule, the simplest scheme beyond the trivial, is given at the end of this section as a problem). It has been generalized to become more accurate and efficient through other approximations, including Simpson's rule, the Newton-Cotes rule, the Gauss-Laguerre rule, etc. Simpson's rule is derived in Section 4.6, while the other advanced techniques are left to more advanced numerical methods courses.
+
+Here, we perform numerical integration by means of a Riemann sum: we subdivide the interval of integration into many subintervals. Then we take the area of each strip to be the value of the function at the midpoint of the subinterval multiplied by the length of the subinterval, and we add the
\ No newline at end of file
diff --git a/samples/texts/348597/page_103.md b/samples/texts/348597/page_103.md
new file mode 100644
index 0000000000000000000000000000000000000000..f658ac43c72912ae53d964bae902e55005f99693
--- /dev/null
+++ b/samples/texts/348597/page_103.md
@@ -0,0 +1,36 @@
+strip areas to obtain the value of the integral. This technique is referred to as the midpoint rule.
+
+We can justify the above algorithm by recalling the Mean Value Theorem of Calculus, which states that:
+
+$$
+\int_{a}^{b} f(x) dx = (b - a) f(c) \tag{4.4}
+$$
+
+where $c \in [a, b]$. Thus, if we divide the interval of integration into narrow subintervals, then the total integral can be written as the sum of the integrals over the subintervals, and we approximate the location of $c$ in a particular subinterval by the midpoint between its boundaries.
+
+**Example 4.5**
+
+Use the above algorithm to compute the value of the definite integral of the function sin(x) from 0 to π.
+
+Solution: Edit and execute the following program:
+
+dx=pi/200;
+x=0:dx:pi-dx;
+xshift=x+dx/2;
+yshift=sin(xshift);
+Int=dx*sum(yshift)
+
+You get for the above integral a result that is within 1/1000 error of the analytical result.
+
+In-Class Exercises
+
+Find numerically, to a 1/10,000 accuracy, the values of the following definite integrals:
+
+Pb. 4.21 $\int_0^\infty \frac{1}{x^2 + 1} dx$
+
+Pb. 4.22 $\int_{0}^{\infty} \exp(-x^2) \cos(2x) dx$
+
+Pb. 4.23 $\int_{0}^{\pi/2} \sin^6(x) \cos^7(x) dx$
\ No newline at end of file
diff --git a/samples/texts/348597/page_104.md b/samples/texts/348597/page_104.md
new file mode 100644
index 0000000000000000000000000000000000000000..58f4dcbfe40c570b4e414341e57b0f861e908c08
--- /dev/null
+++ b/samples/texts/348597/page_104.md
@@ -0,0 +1,42 @@
+$$
+\text{Pb. 4.24} \quad \int_0^\pi \frac{2}{1 + \cos^2(x)} dx
+$$
+
+**Example 4.6**
+
+Plot the value of the indefinite integral $\int_{0}^{x} f(x)dx$ as a function of $x$, where $f(x)$ is the function $\sin(x)$ over the interval $[0, \pi]$.
+
+*Solution:* We solve this problem for the general function $f(x)$ by noting that, in the midpoint-rule approximation:
+
+$$
+\int_0^x f(x)dx \approx \int_0^{x-\Delta x} f(x)dx + f\left(x - \frac{\Delta x}{2}\right)\Delta x \quad (4.5)
+$$
+
+where we are dividing the *x*-interval into subintervals and discretizing *x* to correspond to the coordinates of the boundaries of these subintervals. An array {$x_k$} represents these discrete points, and the above equation is then reduced to a difference equation:
+
+$$
+\text{Integral}(x_k) = \text{Integral}(x_{k-1}) + f(\text{Shifted}(x_{k-1}))\Delta x \quad (4.6)
+$$
+
+where
+
+$$
+\text{Shifted}(x_{k-1}) = x_{k-1} + \Delta x/2 \tag{4.7}
+$$
+
+and the initial condition is Integral($x_1$) = 0.
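As a numerical cross-check of the difference equation (4.6), here is a sketch in Python (the text's own scripts are in MATLAB; NumPy and the example's f = sin are assumed):

```python
import numpy as np

# Running midpoint-rule integral of sin over [0, pi], per Eqs. (4.6)-(4.7).
dx = 0.001
x = np.arange(0.0, np.pi, dx)                # left edges of the subintervals
y_shift = np.sin(x + dx / 2)                 # integrand sampled at the midpoints
integral = np.concatenate(([0.0], np.cumsum(y_shift) * dx))  # Integral(x_1) = 0

# The last entry approximates the definite integral of sin from 0 to pi,
# whose exact value is 2.
print(integral[-1])
```

The cumulative sum plays the role of the recursion Integral($x_k$) = Integral($x_{k-1}$) + f(Shifted($x_{k-1}$))Δx.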
+
+The above algorithm can then be programmed, for the above specific function, as follows:
+
+a=0;
+b=pi;
+dx=0.001;
+x=a:dx:b-dx;
+N=length(x);
+xshift=x+dx/2;
+yshift=sin(xshift);
+Int=zeros(1,N+1);
+Int(1)=0;
+for k=2:N+1
+Int(k)=Int(k-1)+yshift(k-1)*dx;
\ No newline at end of file
diff --git a/samples/texts/348597/page_105.md b/samples/texts/348597/page_105.md
new file mode 100644
index 0000000000000000000000000000000000000000..0acb19044d01b200aa6a610c2d0e897575d7e96a
--- /dev/null
+++ b/samples/texts/348597/page_105.md
@@ -0,0 +1,33 @@
+end
+
+plot([x b], Int)
+
+It may be useful to remind the reader, at this point, that the algorithm in Example 4.6 can be generalized to any arbitrary function. However, it should be noted that the key to the accuracy of the numerical calculation is a good choice for the increment dx. A very rough prescription for the estimation of this quantity, for an oscillating function, can be obtained as follows:
+
+1. Plot the function inside the integral (i.e., the integrand) over the desired interval domain.
+
+2. Verify that the function does not blow up (i.e., go to infinity) anywhere inside this interval.
+
+3. Choose **dx** conservatively, such that at least 30 subintervals are included in any period of oscillation of the function (see Section 6.8 for more details).
+
+## In-Class Exercises
+
+Plot the following indefinite integrals as functions of x over the indicated intervals:
+
+Pb. 4.25 $\int_{0}^{x} \left( \frac{\cos(x)}{\sqrt{1+\sin(x)}} \right) dx \quad 0 < x < \pi / 2$
+
+Pb. 4.26 $\int_{1}^{x} \frac{(1+x^{2/3})^6}{x^{1/3}} dx \quad 1 < x < 8$
+
+Pb. 4.27 $\int_{0}^{x} \left[ \frac{(x+2)}{(x^2 + 2x + 4)^2} \right] dx \quad 0 < x < 1$
+
+Pb. 4.28 $\int_{0}^{x} x^2 \sin(x^3) dx \quad 0 < x < \pi / 2$
+
+Pb. 4.29 $\int_{0}^{x} \sqrt{\tan(x)} \sec^2(x) dx \quad 0 < x < \pi / 4$
+
+## *Homework Problem*
+
+Pb.
4.30 Another algorithm, simpler than the midpoint rule, for evaluating a definite integral is the Trapezoid rule: the area of the slice is approximated by
\ No newline at end of file
diff --git a/samples/texts/348597/page_106.md b/samples/texts/348597/page_106.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9ee515b599650b08d4a0fd238d541738a21b7b5
--- /dev/null
+++ b/samples/texts/348597/page_106.md
@@ -0,0 +1,29 @@
+the area of the trapezoid with vertices having the following coordinates: (x(k), 0); (x(k + 1), 0); (x(k + 1), y(k + 1)); (x(k), y(k)); giving for this trapezoid area the value:
+
+$$ \frac{1}{2}[x(k+1) - x(k)][y(k+1) + y(k)] = \frac{\Delta x}{2}[y(k+1) + y(k)] $$
+
+thus leading to the following iterative expression for the Trapezoid integrator:
+
+$$ I_{T}(k+1) = I_{T}(k) + \frac{\Delta x}{2} [y(k+1) + y(k)] $$
+
+The initial condition is: $I_T(1) = 0$.
+
+a. Evaluate the integrals of Pbs. 4.25 through 4.29 using the Trapezoid rule.
+
+b. Compare, for the same values of $\Delta x$, the accuracy of the Trapezoid rule with that of the midpoint rule.
+
+c. Give a geometrical interpretation for the difference in accuracy obtained using the two integration schemes.
+
+**NOTE** MATLAB has a built-in command for evaluating the integral by the Trapezoid rule. If the sequences of the sampling points and of the function values are given, `trapz(x,y)` gives the desired result.
+
+## 4.5 A Better Numerical Differentiator
+
+In Section 4.2, for the numerical differentiator, we used the simple expression:
+
+$$ d(k) = \frac{1}{\Delta x} (y(k) - y(k-1)) \quad (4.8) $$
+
+Our goal in this section is to find a more accurate expression for the differentiator. We shall use the difference equation for the Trapezoid rule to derive this improved differentiator, which we shall denote by *D*(k).
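To see how much accuracy the simple differentiator of Eq. (4.8) gives up, here is a quick check in Python (the text works in MATLAB; NumPy is assumed) against the exact derivative of sin:

```python
import numpy as np

# Backward-difference differentiator of Eq. (4.8): d(k) = (y(k) - y(k-1)) / dx.
dx = 0.01
x = np.arange(0.0, np.pi, dx)
y = np.sin(x)
d = np.diff(y) / dx                  # estimates the derivative at x(2), x(3), ...

# Worst-case deviation from the exact derivative cos(x); for this first-order
# scheme the error scales like (dx/2) * max|y''|.
err = np.max(np.abs(d - np.cos(x[1:])))
print(err)
```

With dx = 0.01 the observed error is of order dx/2, i.e., only first-order accurate, which is the motivation for the improved scheme derived next.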
+
+The derivation of the difference equation for *D*(k) hinges on the basic observation that differentiating the integral of a function gives back the original function. We say that the numerical differentiator is the inverse of the numerical integrator. We shall use the convolution-summation representation of the solution of a difference equation to find the iterative expression for *D*(k).
+
+Denoting the weighting sequence representations of the identity operation, the numerical integrator, and the numerical differentiator by {$w$}, {$w_1$},
\ No newline at end of file
diff --git a/samples/texts/348597/page_107.md b/samples/texts/348597/page_107.md
new file mode 100644
index 0000000000000000000000000000000000000000..dbc1362d3eaea40b54ef46cd9c17dd6af6604350
--- /dev/null
+++ b/samples/texts/348597/page_107.md
@@ -0,0 +1,53 @@
+{$w_2$}, respectively, and using the notation and results of Section 2.5, we have for the identity operation the following weights:
+
+$$
+\begin{align}
+w(0) &= 1 \tag{4.9a} \\
+w(i) &= 0 \quad \text{for } i = 1, 2, 3, \dots \tag{4.9b}
+\end{align}
+$$
+
+The Trapezoid numerical integrator, as given in **Pb. 4.30**, is a first-order system with the following parameters:
+
+$$
+b_0^{(1)} = \frac{\Delta x}{2} \tag{4.10a}
+$$
+
+$$
+b_{1}^{(1)} = \frac{\Delta x}{2} \tag{4.10b}
+$$
+
+$$
+a_1^{(1)} = -1 \tag{4.10c}
+$$
+
+giving for its weight sequence, as per Example 2.4, the values:
+
+$$
+w_1(0) = \frac{\Delta x}{2} \tag{4.11a}
+$$
+
+$$
+w_1(i) = \Delta x \quad \text{for } i = 1, 2, 3, \dots \tag{4.11b}
+$$
+
+The improved numerical differentiator's weight sequence can now be directly obtained by noting that if we successively cascade integration with differentiation, we are back to the original function. Using the results of **Pb.
2.18**, we can write:
+
+$$
+w(k) = \sum_{i=0}^{k} w_{2}(i)w_{1}(k-i) \quad (4.12)
+$$
+
+Combining the above values for $w(k)$ and $w_1(k)$, we can deduce the following equalities:
+
+$$
+w(0) = 1 = \frac{\Delta x}{2} w_2(0) \tag{4.13a}
+$$
+
+$$
+w(1) = 0 = \Delta x \left[ \frac{1}{2} w_2(1) + w_2(0) \right] \quad (4.13b)
+$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_108.md b/samples/texts/348597/page_108.md
new file mode 100644
index 0000000000000000000000000000000000000000..838cd856747552c7aa011c75b68bb89b60613583
--- /dev/null
+++ b/samples/texts/348597/page_108.md
@@ -0,0 +1,36 @@
+$$ w(2) = 0 = \Delta x \left[ \frac{1}{2} w_2(2) + w_2(1) + w_2(0) \right] \quad (4.13c) $$
+
+etc. ...
+
+from which we can directly deduce the following expressions for the weighting sequence {$w_2$}:
+
+$$ w_2(0) = \frac{2}{\Delta x} \qquad (4.14a) $$
+
+$$ w_2(i) = \frac{4}{\Delta x} (-1)^i \quad \text{for } i=1,2,3,... \qquad (4.14b) $$
+
+From these weights we can compute, as per the results of Example 2.4, the parameters of the difference equation for the improved numerical differentiator, namely:
+
+$$ b_0^{(2)} = \frac{2}{\Delta x} \qquad (4.15a) $$
+
+$$ b_1^{(2)} = -\frac{2}{\Delta x} \qquad (4.15b) $$
+
+$$ a_1^{(2)} = 1 \qquad (4.15c) $$
+
+giving for $D(k)$ the following defining difference equation:
+
+$$ D(k) = \frac{2}{\Delta x} [y(k) - y(k-1)] - D(k-1) \quad (4.16) $$
+
+In **Pb. 4.32** and in other cases, you can verify that indeed this is an improved numerical differentiator. We shall, later in the chapter, use the above expression for $D(k)$ in the numerical solution of ordinary differential equations.
+
+**In-Class Exercises**
+
+**Pb.
4.31** Find the inverse system corresponding to the discrete system governed by the difference equation:
+
+$$ y(k) = u(k) - \frac{1}{2}u(k-1) + \frac{1}{3}y(k-1) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_109.md b/samples/texts/348597/page_109.md
new file mode 100644
index 0000000000000000000000000000000000000000..ee5d9afd59eb387f071ef6e698183a55d5dfccbc
--- /dev/null
+++ b/samples/texts/348597/page_109.md
@@ -0,0 +1,32 @@
+Pb. 4.32 Compute numerically the derivative of the function
+
+$$y = x^3 + 2x^2 + 5 \quad \text{in the interval} \quad 0 \le x \le 1$$
+
+using the difference equations for both *d*(k) and *D*(k) for different values of Δx. Comparing the numerical results with the analytic results, compute the errors in both methods.
+
+Application
+
+In this application, we make use of the improved differentiator and the corresponding integrator (Trapezoid rule) for modeling FM modulation and demodulation. The goal is to show that we retrieve a good copy of the original message, using the first-order iterators, thus validating the use of these expressions in other communication engineering problems, where reliable numerical algorithms for differentiation and integration are needed in the simulation of different modulation-demodulation schemes.
+
+As pointed out in Pb. 3.35, the FM modulated signal is given by:
+
+$$u_{\text{FM}}(t) = A_c \cos\left(2\pi f_c t + 2\pi k_f \int_{-\infty}^{t} m(\tau) d\tau\right) \quad (4.17)$$
+
+The following *script M-file* details the steps in the FM modulation, if the signal in some normalized unit is given by the expression:
+
+$$m(t) = \operatorname{sinc}(10t) \qquad (4.18)$$
+
+We assume that, in the same units, $f_c = k_f = 25$.
+
+The second part of the program follows the demodulation process: the phase of the modulated signal is unwrapped, and the demodulated signal is obtained by differentiating this phase, while subtracting the carrier phase, which is linear in time.
+
+```matlab
+fc=25;kf=25;tlowb=-1;tupb=1;
+t=tlowb:0.0001:tupb;
+p=length(t);
+dt=(tupb-tlowb)/(p-1);
+m=sinc(10*t);
+subplot(2,2,1)
+plot(t,m)
+title('Message')
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_11.md b/samples/texts/348597/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..05ac6f7e05393dd77d8bc1a4b8fe6efe0007b2c8
--- /dev/null
+++ b/samples/texts/348597/page_11.md
@@ -0,0 +1,81 @@
+5.3 Optimization Methods
+
+5.3.1 Graphical Method
+
+5.3.2 Numerical Methods
+
+5.3.3 MATLAB *fmin* and *fmins* Built-in Functions
+
+5.4 MATLAB Commands Review
+
+6. Complex Numbers
+
+6.1 Introduction
+
+6.2 The Basics
+
+6.2.1 Addition
+
+6.2.2 Multiplication by a Real or Imaginary Number
+
+6.2.3 Multiplication of Two Complex Numbers
+
+6.3 Complex Conjugation and Division
+
+6.3.1 Division
+
+6.4 Polar Form of Complex Numbers
+
+6.4.1 New Insights into Multiplication and Division of Complex Numbers
+
+6.5 Analytical Solutions of Constant Coefficients ODE
+
+6.5.1 Transient Solutions
+
+6.5.2 Steady-State Solutions
+
+6.5.3 Applications to Circuit Analysis
+
+6.6 Phasors
+
+6.6.1 Phasor of Two Added Signals
+
+6.7 Interference and Diffraction of Electromagnetic Waves
+
+6.7.1 The Electromagnetic Wave
+
+6.7.2 Addition of Electromagnetic Waves
+
+6.7.3 Generalization to N-waves
+
+6.8 Solving ac Circuits with Phasors: The Impedance Method
+
+6.8.1 RLC Circuit Phasor Analysis
+
+6.8.2 The Infinite LC Ladder
+
+6.9 Transfer Function for a Difference Equation with Constant Coefficients*
+
+6.10 MATLAB Commands Review
+
+7.
Vectors
+
+7.1 Vectors in Two Dimensions (2-D)
+
+7.1.1 Addition
+
+7.1.2 Multiplication of a Vector by a Real Number
+
+7.1.3 Cartesian Representation
+
+7.1.4 MATLAB Representation of the Above Results
+
+7.2 Dot (or Scalar) Product
+
+7.2.1 MATLAB Representation of the Dot Product
+
+7.3 Components, Direction Cosines, and Projections
+
+7.3.1 Components
\ No newline at end of file
diff --git a/samples/texts/348597/page_110.md b/samples/texts/348597/page_110.md
new file mode 100644
index 0000000000000000000000000000000000000000..20d51b8fb10485c9b8089ca2311c900fcd3eee6c
--- /dev/null
+++ b/samples/texts/348597/page_110.md
@@ -0,0 +1,38 @@
+```matlab
+intm=zeros(1,p);
+for k=1:p-1
+    intm(k+1)=intm(k)+0.5*dt*(m(k+1)+m(k));
+end
+subplot(2,2,2)
+plot(t,intm)
+title('Modulation Phase')
+
+uc=exp(j*(2*pi*fc*t+2*pi*kf*intm));
+u=real(uc);
+phase=unwrap(angle(uc))-2*pi*fc*t;
+subplot(2,2,3)
+plot(t,u)
+axis([-0.15 0.15 -1 1])
+title('Modulated Signal')
+
+Dphase(1)=0;
+for k=1:p-1
+    Dphase(k+1)=(2/dt)*(phase(k+1)-phase(k))-Dphase(k);
+end
+
+md=Dphase/(2*pi*kf);
+subplot(2,2,4)
+plot(t,md)
+title('Reconstructed Message')
+```
+
+As can be observed by examining Figure 4.1, the results of the simulation are very good, giving confidence in the expressions of the iterators used.
+
+## 4.6 A Better Numerical Integrator: Simpson's Rule
+
+Prior to discussing Simpson's rule for integration, we shall derive, for a simple case, an important geometrical result.
+
+**THEOREM**
+
+The area of a parabolic segment is equal to 2/3 of the area of the circumscribed parallelogram.
\ No newline at end of file
diff --git a/samples/texts/348597/page_111.md b/samples/texts/348597/page_111.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b524b37f7d162a31afe3d79697a3242ce6f420b
--- /dev/null
+++ b/samples/texts/348597/page_111.md
@@ -0,0 +1,15 @@
+**FIGURE 4.1**
+
+Simulation of the modulation and demodulation of an FM signal.
+ +PROOF We prove this general theorem in a specialized case, for the purpose of making the derivation simple; however, the result is true for the most general case. Referring to Figure 4.2, we want to show that the area bounded by the x-axis and the parabola is equal to 2/3 the area of the ABCD rectangle. Now the details: + +The parabola in Figure 4.2 is described by the equation: + +$$y=ax^2+b \tag{4.19}$$ + +It intersects the x-axis at the points $(-(-b/a)^{1/2}, 0)$ and $((-b/a)^{1/2}, 0)$, and the y-axis at the point $(0, b)$. The area bounded by the x-axis and the parabola is then simply the following integral: + +$$\int_{-(-b/a)^{1/2}}^{(-b/a)^{1/2}} (ax^2 + b)dx = \frac{4}{3} \frac{b^{3/2}}{(-a)^{1/2}} \tag{4.20}$$ + +The area of the ABCD rectangle is: $b(2(-b/a)^{1/2}) = \frac{2b^{3/2}}{(-a)^{1/2}}$, which establishes the theorem. \ No newline at end of file diff --git a/samples/texts/348597/page_112.md b/samples/texts/348597/page_112.md new file mode 100644 index 0000000000000000000000000000000000000000..d263f318f00dc9ac60f0a9598f1dd71dca4534cf --- /dev/null +++ b/samples/texts/348597/page_112.md @@ -0,0 +1,5 @@ +FIGURE 4.2 +A parabolic segment and its circumscribed parallelogram. + +FIGURE 4.3 +The first two slices in the Simpson's rule construct. AH = HG = $\Delta x$. \ No newline at end of file diff --git a/samples/texts/348597/page_113.md b/samples/texts/348597/page_113.md new file mode 100644 index 0000000000000000000000000000000000000000..5c83cc84df7b946c24af17f851001bb239abb3bb --- /dev/null +++ b/samples/texts/348597/page_113.md @@ -0,0 +1,42 @@ +Simpson's Algorithm: We shall assume that the interval of integration is sampled at an odd number of points (2N + 1), so that we have an even number of intervals. The algorithm groups the intervals in pairs. + +Referring to Figure 4.3, the points A, H, and G are the first three points in the sampled x-interval. 
The assumption underlying Simpson's rule is that the points B, D, and F, which lie on the curve of the integrand, can be approximated as lying on a parabola. The line CDE is tangent to this parabola at the point D.
+
+Under the above approximation, the value of the integral of the y-function between the points A and G is then simply the sum of the area of the trapezoid ABFG plus 2/3 the area of the parallelogram BCEF, namely:
+
+$$
+\begin{aligned}
+\text{Area of the first two slices} &= \Delta x(y(1) + y(3)) + \frac{4\Delta x}{3}\left(y(2) - \frac{y(1)+y(3)}{2}\right) \\
+&= \frac{\Delta x}{3}(y(1) + 4y(2) + y(3))
+\end{aligned}
+\quad (4.21)
+$$
+
+In a similar fashion, we can find the area of the third and fourth slices,
+
+$$ \text{Area of the third and fourth slices} = \frac{\Delta x}{3} (y(3) + 4y(4) + y(5)) \quad (4.22) $$
+
+Continuing for each successive pair of slices, we obtain for the total integral, or total area of all slices, the expression:
+
+$$ \text{Total area of all slices} = \frac{\Delta x}{3} \left[ y(1) + 4y(2) + 2y(3) + 4y(4) + 2y(5) + \dots + 4y(2N) + y(2N+1) \right] \quad (4.23) $$
+
+that is, the weights are equal to 1 for the first and last elements, equal to 4 for the even-indexed elements, and equal to 2 for the remaining odd-indexed elements.
+
+**Example 4.7**
+
+Using Simpson's rule, compute the integral of sin(x) over the interval 0 ≤ x ≤ π.
+
+Solution: Edit and execute the following script M-file:
+
+```matlab
+a=0;b=pi;N=4;
+x=linspace(a,b,2*N+1);
+y=sin(x);
+for k=1:2*N+1
+ if k==1 | k==2*N+1
+ w(k)=1;
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_114.md b/samples/texts/348597/page_114.md
new file mode 100644
index 0000000000000000000000000000000000000000..883dba083336a456cbf49dcc91054cffd76986e7
--- /dev/null
+++ b/samples/texts/348597/page_114.md
@@ -0,0 +1,32 @@
+```matlab
+ elseif rem(k,2)==0
+ w(k)=4;
+ else
+ w(k)=2;
+ end
+end
+
+Intsimp=((b-a)/(3*(length(x)-1)))*sum(y.*w)
+```
+
+Now compare the above answer with the one you obtain if you use the Trapezoid rule, by entering the command: `Inttrapz=trapz(x,y)`.
+
+**In-Class Exercise**
+
+**Pb. 4.33** In the above derivation of Simpson's method, we constructed the algorithm by determining the weights sequence. Reformulate this algorithm into an equivalent iterator format.
+
+**Homework Problems**
+
+In this chapter, we surveyed three numerical techniques for computing the integral of a function. We observed that the different methods lead to different levels of accuracy. In Section 6.8, we derive formulas for estimating the accuracy of the different methods discussed here. However, and as noted previously, more accurate techniques than those presented here exist for calculating integrals numerically; many of these are in the MATLAB library and are covered in numerical analysis courses. In particular, familiarize yourself, using the help folder, with the commands **quad** and **quad8**.
+
+**Pb. 4.34** The goal of this problem, using the **quad8** command, is to develop a function *M-file* for the Gaussian distribution function of probability theory.
+
+The Gaussian probability density function is given by:
+
+$$f_X(x) = \frac{1}{(2\pi)^{1/2} \sigma_X} \exp \left[ - \frac{(x - a_X)^2}{2\sigma_X^2} \right]$$
+
+where $-\infty < a_X < \infty$ and $\sigma_X > 0$ are constants, equal respectively to the mean and the standard deviation (the square root of the variance) of $x$.
+
+The Gaussian probability distribution function is defined as:
+
+$$F_X(x) = \int_{-\infty}^{x} f_X(\zeta) d\zeta$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_115.md b/samples/texts/348597/page_115.md
new file mode 100644
index 0000000000000000000000000000000000000000..f56b8d2536e9357640fb45e28835d595e54a921b
--- /dev/null
+++ b/samples/texts/348597/page_115.md
@@ -0,0 +1,31 @@
+Through a change of variable (specify it!), the Gaussian probability distribution function can be written as a function of the normalized distribution function,
+
+$$F_X(x) = F\left(\frac{x - a_X}{\sigma_X}\right)$$
+
+where
+
+$$F(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp\left(-\frac{\xi^2}{2}\right) d\xi$$
+
+a. Develop the *function M-file* for the normal distribution function.
+
+b. Show that for negative values of x, we have:
+
+$$F(-x) = 1 - F(x)$$
+
+c. Plot the normalized distribution function for values of x in the interval $0 \le x \le 5$.
+
+**Pb. 4.35** The computation of the arc length of a curve can be reduced to a one-dimensional integration. Specifically, if the curve is described parametrically, then the arc length between the adjacent points $(x(t), y(t), z(t))$ and $(x(t + \Delta t), y(t + \Delta t), z(t + \Delta t))$ is given by:
+
+$$\Delta s = \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 + \left(\frac{dz}{dt}\right)^2} \Delta t$$
+
+giving immediately for the arc length from $t_0$ to $t_1$ the expression:
+
+$$s = \int_{t_0}^{t_1} \sqrt{\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 + \left(\frac{dz}{dt}\right)^2} dt$$
+
+a.
Calculate the arc length of the curve described by $x = t^2$ and $y = t^3$ between the points $t = 0$ and $t = 3$.
+
+b. Assuming that a 2-D curve is given in polar coordinates by $r = f(\theta)$, and then noting that:
+
+$$x = f(\theta) \cos(\theta) \quad \text{and} \quad y = f(\theta) \sin(\theta)$$
+
+use the above expression for the arc length (here the parameter is $\theta$) to derive the formula for the arc length in polar coordinates to be
\ No newline at end of file
diff --git a/samples/texts/348597/page_116.md b/samples/texts/348597/page_116.md
new file mode 100644
index 0000000000000000000000000000000000000000..d00c684b338b7d64f0a4e33889e08df78f567fd6
--- /dev/null
+++ b/samples/texts/348597/page_116.md
@@ -0,0 +1,13 @@
+$$s = \int_{\theta_0}^{\theta_1} \sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2} d\theta$$
+
+c. Use the result of (b) above to derive the length of the cardioid $r = a(1 + \cos(\theta))$ between the angles $0$ and $\pi$.
+
+**Pb. 4.36** In **Pb. 3.27**, you plotted the Fermi-Dirac distribution. This curve represents the average population of fermions in a state with energy $\epsilon$ (ignore for the moment the internal quantum numbers of the fermions). As you would have noticed, this quantity is always smaller than or equal to one. This is a manifestation of Pauli's exclusion principle, which states that no two fermions can be simultaneously in the same state. This, of course, means that even at zero absolute temperature, the momentum of almost all fermions is not zero; that is, we cannot freeze the thermal motion of all electrons at absolute zero. This fact is fundamental to our understanding of metals and semiconductors, and will be the subject of detailed studies in courses on physical electronics.
+
+In nature, on the other hand, there is another family of particles that behaves in quite the opposite manner; they are called Bosons.
These particles are not averse to occupying the same state; moreover, they have a strong affinity, under the proper conditions, to aggregate in the lowest energy state available. When this happens, we say that the particles have formed a Bose condensate. This phenomenon has been predicted theoretically to occur both on the laboratory scale and in some astrophysical objects (neutron stars). The phenomena of superconductivity, superfluidity, and pion condensation, which occur in condensed or supercondensed matter, are manifestations of Bose condensates; however, only recently has this phenomenon also been observed experimentally in gaseous systems of atoms cooled in a process called laser cooling. The details of the cooling mechanism do not concern us at the moment, but what we seek to achieve in this problem is an understanding of the fashion in which the number density (i.e., the number per unit volume) of the condensate can become macroscopic. To achieve this goal, we shall use the skills that you have developed in numerically integrating and differentiating functions.
+
+The starting point of the analysis is a formula that you will derive in future courses in statistical physics; it states that the number of particles in the condensate (i.e., the atoms in the gas that have momentum zero) can be written, for a noninteracting Boson system, as:
+
+$$n_{\text{condensate}} = n - \frac{1}{\lambda_T^3} g_{3/2}(z)$$
+
+where $\lambda_T$ is a quantity proportional to $T^{-1/2}$, $n$ is the total number density, and the second term on the RHS of the equation represents the number density of the particles not in the condensate (i.e., those particles whose momentum is not zero).
The function $g_{3/2}(z)$ is defined such that: \ No newline at end of file diff --git a/samples/texts/348597/page_117.md b/samples/texts/348597/page_117.md new file mode 100644 index 0000000000000000000000000000000000000000..65b67c598e271f77157f9a316ee1ef57ad094eb4 --- /dev/null +++ b/samples/texts/348597/page_117.md @@ -0,0 +1,21 @@ +$$g_{3/2}(z) = z \frac{\partial}{\partial z} g_{5/2}(z)$$ + +where + +$$g_{5/2}(z) = - \frac{4}{\sqrt{\pi}} \int_{0}^{\infty} dx x^2 \ln(1 - z \exp(-x^2))$$ + +and $z$, for physical reasons, always remains in the interval $0 < z \le 1$. + +a. Plot $g_{5/2}(z)$ as a function of $z$ over the interval $0 < z \le 1$. + +b. Plot $g_{3/2}(z)$ over the same interval and find its maximum value. + +c. As $n$ increases or $T$ decreases, the second term on the rhs of the population equation keeps adjusting the value of $z$ so that the two terms on the RHS cancel each other, thus keeping $n_{condensate} = 0$. However, at some point, $z$ reaches the value 1, which is its maximum value and the second term on the RHS cannot increase further. At this point, $n_{condensate}$ starts building up with any increase in the total number density. The value of the total density at which this starts happening is called the threshold value for the condensate formation. Prove that this threshold is given by: $n^{\text{threshold}} \lambda_T^3 = 2.612$. + +## 4.7 Numerical Solutions of Ordinary Differential Equations + +Ordinary linear differential equations are of the form: + +$$a_n(t) \frac{d^n y}{dt^n} + a_{n-1}(t) \frac{d^{n-1} y}{dt^{n-1}} + \dots + a_1(t) \frac{dy}{dt} + a_0(t) y = u(t) \quad (4.24)$$ + +The *a*'s are called the coefficients and $u(t)$ is called the source (or input) term. 
Ordinary differential equations (ODEs) show up in many problems of electrical engineering, particularly in circuit problems where, depending on the circuit element, the potential across it may depend on the deposited charge, the current (which is the time derivative of the charge), or the derivative of the current (i.e., the second time derivative of the charge); that is, in the same equation, we may have a function and its first- and second-order derivatives. To focus this discussion, let us start by writing the potential difference across the passive elements of circuit theory. Specifically, the voltage drops across a resistor, capacitor, or inductor are given as follows:
\ No newline at end of file
diff --git a/samples/texts/348597/page_118.md b/samples/texts/348597/page_118.md
new file mode 100644
index 0000000000000000000000000000000000000000..8d87ff93327150b0579ea2d5269ed5de5c0a51e8
--- /dev/null
+++ b/samples/texts/348597/page_118.md
@@ -0,0 +1,26 @@
+1. The voltage across a resistor is given, using Ohm's law, by:
+
+$$V_R(t) = RI(t) \quad (4.25)$$
+
+where R is the resistance, measured in Ohms, and I is the current.
+
+2. The voltage across a capacitor is proportional to the magnitude of the charge that accumulates on either plate, that is:
+
+$$V_C(t) = \frac{Q(t)}{C} = \frac{\int I(t) dt}{C} \qquad (4.26)$$
+
+The second equality reflects the relation of the current to the charge. C is the capacitance and, as previously pointed out, is measured in Farads.
+
+3. The voltage across an inductor can be deduced from Lenz's law, which stipulates that the voltage across an inductor is proportional to the time derivative of the current going through it:
+
+$$V_L(t) = L \frac{dI(t)}{dt} \quad (4.27)$$
+
+where L is the inductance and is measured in Henrys.
+
+From these expressions for the voltage drop across each of the passive elements in a circuit, and using the Kirchhoff voltage law, it is then an easy matter to write down the differential equations describing, for example, a series RC or an RLC circuit.
+
+**RC Circuit:** Referring to the RC circuit diagram in Figure 4.4, the differential equation describing the voltage across the capacitor is given by:
+
+$$RC \frac{dV_C}{dt} + V_C = V_s(t) \quad (4.28)$$
+
+FIGURE 4.4
+RC circuit with an ac source.
\ No newline at end of file
diff --git a/samples/texts/348597/page_119.md b/samples/texts/348597/page_119.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c96fbd2ad9392dcc37a99f679e29424ffbd7778
--- /dev/null
+++ b/samples/texts/348597/page_119.md
@@ -0,0 +1,26 @@
+FIGURE 4.5
+RLC circuit with ac source.
+
+**RLC Circuit:** Referring to the RLC circuit in Figure 4.5, the voltage across the capacitor is described by the ODE:
+
+$$ LC \frac{d^2 V_C}{dt^2} + RC \frac{dV_C}{dt} + V_C = V_s(t) \quad (4.29) $$
+
+Numerically solving these and other types of ODEs will be the subject of the remainder of this section. In Section 4.7.1, we consider first-order iterators to represent the different-order derivatives, apply this algorithm to solve the above types of problems, and conclude by pointing out some of the limitations of this algorithm. In Section 4.7.2, we discuss higher-order iterators, particularly the Runge-Kutta technique. In Section 4.7.3, we familiarize ourselves with the use of standard MATLAB solvers for ODEs.
+
+### 4.7.1 First-Order Iterator
+
+In Section 4.5, we found an improved expression for the numerical differentiator, $D(k)$:
+
+$$ D(k) = \frac{2}{\Delta t} [y(k) - y(k-1)] - D(k-1) \quad (4.16) $$
+
+which functionally corresponded to the inverse of the Trapezoid rule for integration. (Note that the independent variable here is *t*, and not *x*.)
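Before cascading this iterator, it is worth checking Eq. (4.16) numerically. A minimal sketch in Python (the text's scripts are MATLAB; NumPy is assumed, and the recursion is seeded with the known initial derivative of y = sin(t)):

```python
import numpy as np

# Improved differentiator, Eq. (4.16): D(k) = (2/dt)*(y(k) - y(k-1)) - D(k-1),
# applied to y = sin(t), whose exact derivative is cos(t).
dt = 0.01
t = np.arange(0.0, np.pi, dt)
y = np.sin(t)

D = np.zeros_like(y)
D[0] = np.cos(t[0])                  # seed the recursion with the known derivative
for k in range(1, len(t)):
    D[k] = (2.0 / dt) * (y[k] - y[k - 1]) - D[k - 1]

# Compare against the simple backward difference d(k) of Eq. (4.8).
err_D = np.max(np.abs(D[1:] - np.cos(t[1:])))
err_d = np.max(np.abs(np.diff(y) / dt - np.cos(t[1:])))
print(err_D, err_d)                  # D(k) should be markedly more accurate
```

With these (assumed) settings, the recursion tracks cos(t) to roughly second order in dt, while the backward difference of Eq. (4.8) is only first order, consistent with D(k) being the inverse of the Trapezoid integrator.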
+ +Applying this first-order differentiator in cascade leads to an expression for the second-order differentiator, namely: + +$$ +\begin{align} +D2(k) &= \frac{2}{\Delta t} [D(k) - D(k-1)] - D2(k-1) \tag{4.30} \\ +&= \frac{4}{(\Delta t)^2} [y(k) - y(k-1)] - \frac{4}{\Delta t} D(k-1) - D2(k-1) +\end{align} + $$ \ No newline at end of file diff --git a/samples/texts/348597/page_12.md b/samples/texts/348597/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..15d3589e140167b143c8532ffb0f61d3b34b3418 --- /dev/null +++ b/samples/texts/348597/page_12.md @@ -0,0 +1,77 @@ +7.3.2 Direction Cosines + +7.3.3 Projections + +7.4 The Dirac Notation and Some General Theorems* + +7.4.1 Cauchy-Schwartz Inequality + +7.4.2 Triangle Inequality + +7.5 Cross Product and Scalar Triple Product* + +7.5.1 Cross Product + +7.5.2 Geometric Interpretation of the Cross Product + +7.5.3 Scalar Triple Product + +7.6 Vector Valued Functions + +7.7 Line Integral + +7.8 Infinite Dimensional Vector Spaces* + +7.9 MATLAB Commands Review + +## 8. 
Matrices + +8.1 Setting up Matrices + +8.1.1 Creating Matrices in MATLAB + +8.2 Adding Matrices + +8.3 Multiplying a Matrix by a Scalar + +8.4 Multiplying Matrices + +8.5 Inverse of a Matrix + +8.6 Solving a System of Linear Equations + +8.7 Application of Matrix Methods + +8.7.1 dc Circuit Analysis + +8.7.2 dc Circuit Design + +8.7.3 ac Circuit Analysis + +8.7.4 Accuracy of a Truncated Taylor Series + +8.7.5 Reconstructing a Function from Its Fourier Components + +8.7.6 Interpolating the Coefficients of an (n - 1)-degree Polynomial from n Points + +8.7.7 Least-Square Fit of Data + +8.8 Eigenvalues and Eigenvectors* + +8.8.1 Finding the Eigenvalues of a Matrix + +8.8.2 Finding the Eigenvalues and Eigenvectors Using MATLAB + +8.9 The Cayley-Hamilton and Other Analytical Techniques* + +8.9.1 Cayley-Hamilton Theorem + +8.9.2 Solution of Equations of the Form $\frac{dX}{dt} = AX$ + +8.9.3 Solution of Equations of the Form $\frac{dX}{dt} = AX + B(t)$ + +8.9.4 Pauli Spinors + +8.10 Special Classes of Matrices* + +8.10.1 Hermitian Matrices \ No newline at end of file diff --git a/samples/texts/348597/page_120.md b/samples/texts/348597/page_120.md new file mode 100644 index 0000000000000000000000000000000000000000..3b601f7248e1fd1525b61fee0ea4b0342c20ede9 --- /dev/null +++ b/samples/texts/348597/page_120.md @@ -0,0 +1,36 @@ +**Example 4.8** + +Find the first-order iterative scheme to solve the first-order differential equation given by: + +$$a(t) \frac{dy}{dt} + b(t)y = u(t) \quad (4.31)$$ + +with the initial condition $y(t_1)$ specified. + +**Solution:** Substituting Eq. (4.16) for the numerical differentiator in the differential equation, we deduce the following first-order difference equation for $y(k)$: + +$$y(k) = \left[ \frac{2a(k)}{\Delta t} + b(k) \right]^{-1} \left[ \frac{2a(k)y(k-1)}{\Delta t} + a(k)D(k-1) + u(k) \right] \quad (4.32)$$ + +to which we should add, in the numerical subroutine, the expression for the first-order differentiator $D(k)$ as given by Eq. 
(4.16). The initial conditions for the function at the origin of time specify the first elements of the $y$ and $D$ arrays:

$$\begin{align*}
y(1) &= y(t = t_1) \\
D(1) &= (1 / a(1))[u(1) - b(1)y(1)]
\end{align*}$$

## Application

To illustrate the use of the above algorithm, let us solve, over the interval $0 \le t \le 6$, for the potential across the capacitor in an RC circuit with an ac source; that is,

$$a \frac{dy}{dt} + y = \sin(2\pi t) \quad (4.33)$$

where $a = RC$ and $y(t = 0) = 0$.

**Solution:** Edit and execute the following *script M-file*, for $a = 1/(2\pi)$:

```
tin=0;
tfin=6;
t=linspace(tin,tfin,3000);
N=length(t);
y=zeros(1,N);
``` \ No newline at end of file
diff --git a/samples/texts/348597/page_121.md b/samples/texts/348597/page_121.md
new file mode 100644
index 0000000000000000000000000000000000000000..9c8c8f2ac562736f752b82fcfe23291b74e15363
--- /dev/null
+++ b/samples/texts/348597/page_121.md
@@ -0,0 +1,41 @@
+```
dt=(tfin-tin)/(N-1);
u=sin(2*pi*t);
a=(1/(2*pi))*ones(1,N);
b=ones(1,N);
y(1)=0;
D(1)=(1/a(1))*(u(1)-b(1)*y(1));
for k=2:N
  y(k)=((2*a(k)/dt+b(k))^(-1))*...
    (2*a(k)*y(k-1)/dt+a(k)*D(k-1)+u(k));
  D(k)=(2/dt)*(y(k)-y(k-1))-D(k-1);
end
```

In-Class Exercise

Pb. 4.37 Plot the amplitude of y, and its dephasing from u, as a function of a for large t.

**Example 4.9**

Find the first-order iterative scheme to solve the second-order differential equation given by:

$$
a(t) \frac{d^2 y}{dt^2} + b(t) \frac{dy}{dt} + c(t)y = u(t) \quad (4.34)
$$

with initial conditions $y(t = 0)$ and $\left.\frac{dy}{dt}\right|_{t=0}$ given.
+
Solution: Substituting the above first-order expressions of the iterators for the first-order and second-order numerical differentiators [Eqs. (4.16) and (4.30), respectively] into Eq. (4.34), we deduce the following iterative equation for y(k):

$$
\begin{equation}
\begin{split}
y(k) = {}& \left\{ 4 \frac{a(k)}{(\Delta t)^2} + 2 \frac{b(k)}{\Delta t} + c(k) \right\}^{-1} \times \\
 & \left\{ y(k-1) \left[ 4 \frac{a(k)}{(\Delta t)^2} + 2 \frac{b(k)}{\Delta t} \right] + D(k-1) \left[ 4 \frac{a(k)}{\Delta t} + b(k) \right] + a(k)D2(k-1) + u(k) \right\}
\end{split}
\tag{4.35}
\end{equation}
$$ \ No newline at end of file
diff --git a/samples/texts/348597/page_122.md b/samples/texts/348597/page_122.md
new file mode 100644
index 0000000000000000000000000000000000000000..877f4d042a7a6a9c15d28b4b2aa7ea71cea4e522
--- /dev/null
+++ b/samples/texts/348597/page_122.md
@@ -0,0 +1,35 @@
+This difference equation will be supplemented in the ODE numerical solver routine with the iterative equations for $D(k)$ and $D2(k)$, as given respectively by Eqs. (4.16) and (4.30), and with the initial conditions for the function and its derivative. The first elements of the $y$, $D$, and $D2$ arrays are given by:

$$y(1) = y(t = 0)$$

$$D(1) = \left. \frac{dy}{dt} \right|_{t=0}$$

$$D2(1) = (1 / a(1))(-b(1)D(1) - c(1)y(1) + u(1))$$

## Application 1

To illustrate the use of the first-order iterator algorithm in solving a second-order ordinary differential equation, let us find, over the interval $0 \le t \le 16\pi$, the voltage across the capacitor in an RLC circuit with an ac voltage source. This reduces to solving the following ODE:

$$a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = \sin(\omega t) \quad (4.36)$$

where $a = LC$, $b = RC$, $c = 1$. Choose, in some normalized units, $a = 1$, $b = 3$, $\omega = 1$, and let $y(t = 0) = y'(t = 0) = 0$.
+
**Solution:** Edit and execute the following script M-file:

```
tin=0;
tfin=16*pi;
t=linspace(tin,tfin,2000);
a=1;
b=3;
c=1;
w=1;
N=length(t);
y=zeros(1,N);
dt=(tfin-tin)/(N-1);
u=sin(w*t);
y(1)=0;
D(1)=0;
D2(1)=(1/a)*(-b*D(1)-c*y(1)+u(1));
for k=2:N
``` \ No newline at end of file
diff --git a/samples/texts/348597/page_123.md b/samples/texts/348597/page_123.md
new file mode 100644
index 0000000000000000000000000000000000000000..cbe207756f5d9f65733cc0b74887cdaa12432d6c
--- /dev/null
+++ b/samples/texts/348597/page_123.md
@@ -0,0 +1,41 @@
+```
  y(k)=((4*a/dt^2+2*b/dt+c)^(-1))*...
    (y(k-1)*(4*a/dt^2+2*b/dt)+D(k-1)*(4*a/dt+b)+...
    a*D2(k-1)+u(k));
  D(k)=(2/dt)*(y(k)-y(k-1))-D(k-1);
  D2(k)=(4/dt^2)*(y(k)-y(k-1))-(4/dt)*D(k-1)-D2(k-1);
end

plot(t,y,t,u,'--')
```

The dashed curve is the temporal profile of the source term.

## In-Class Exercise

Pb. 4.38 Plot the amplitude of y and its dephasing from u, as a function of a for large t, for $0.1 < a < 5$.

## Application 2

Solve, over the interval $0 < t < 1$, the following second-order differential equation:

$$ (1 - t^2) \frac{d^2 y}{dt^2} - 2t \frac{dy}{dt} + 20y = 0 \quad (4.37) $$

with the initial conditions: $y(t=0) = 3/8$ and $y'(t=0) = 0$.
+
Then, compare your numerical result with the analytical solution to this problem:

$$ y = \frac{1}{8}(35t^4 - 30t^2 + 3) \quad (4.38) $$

**Solution:** Edit and execute the following script M-file:

```
tin=0;
tfin=1;
t=linspace(tin,tfin,2000);
N=length(t);
a=1-t.^2;
``` \ No newline at end of file
diff --git a/samples/texts/348597/page_124.md b/samples/texts/348597/page_124.md
new file mode 100644
index 0000000000000000000000000000000000000000..baec67ac90015329c8e7269792cd80c95267ab65
--- /dev/null
+++ b/samples/texts/348597/page_124.md
@@ -0,0 +1,43 @@
+```matlab
b=-2*t;
c=20*ones(1,N);
y=zeros(1,N);
D=zeros(1,N);
dt=(tfin-tin)/(N-1);
u=zeros(1,N);
y(1)=3/8;
D(1)=0;
D2(1)=(1/a(1))*(-b(1)*D(1)-c(1)*y(1)+u(1));

for k=2:N
  y(k)=((4*a(k)/dt^2+2*b(k)/dt+c(k))^(-1))*...
    (y(k-1)*(4*a(k)/dt^2+2*b(k)/dt)+D(k-1)*...
    (4*a(k)/dt+b(k))+a(k)*D2(k-1)+u(k));
  D(k)=(2/dt)*(y(k)-y(k-1))-D(k-1);
  D2(k)=(4/dt^2)*(y(k)-y(k-1))-(4/dt)*D(k-1)-...
    D2(k-1);
end

yanal=(35*t.^4-30*t.^2+3)/8;

plot(t,y,t,yanal,'--')
```

As you will observe upon running this program, the numerical solution and the analytical solution agree very well.

**NOTE** The above ODE is that of the Legendre polynomial of order *l* = 4, encountered earlier in Chapter 2, in **Pb. 2.25**:

$$
(1 - t^2) \frac{d^2 P_l}{dt^2} - 2t \frac{dP_l}{dt} + l(l+1)P_l = 0 \quad (4.39)
$$

where

$$
P_l(-t) = (-1)^l P_l(t) \tag{4.40}
$$

## *Homework Problem*

**Pb. 4.39** The above algorithms assume that the source function is continuous. If it is not, we may encounter problems upon applying this algorithm over a transition region, as will be illustrated in the following problem.
\ No newline at end of file
diff --git a/samples/texts/348597/page_125.md b/samples/texts/348597/page_125.md
new file mode 100644
index 0000000000000000000000000000000000000000..7c8f64cdc456ff8b1a84118ebbca3e91fae2a65f
--- /dev/null
+++ b/samples/texts/348597/page_125.md
@@ -0,0 +1,37 @@
+Solve, over the interval $0 \le t \le 20$, the following first-order differential equation for $a = 2$ and $a = 0.5$:

$$a \frac{dy}{dt} + y = 1$$

where $y(0) = 0$. (Physically, this would correspond to the charging of a capacitor from a dc source connected suddenly to the battery at time zero. Here, $y$ is the voltage across the capacitor, and $a = RC$.)

NOTE The analytic solution to this problem is $y = 1 - \exp(-t/a)$.

### 4.7.2 Higher-Order Iterators: The Runge-Kutta Method*

In this subsection, we want to explore the possibility that if we sample the function *n* times per step, we will obtain a more accurate solution to the ODE than that obtained from the first-order iterator for the same value of $\Delta t$.

To focus the discussion, consider the ODE:

$$y'(t) = f(t, y(t)) \quad (4.41)$$

Higher-order ODEs can be reduced, as will be shown at the end of the subsection, to a system of equations having the same functional form as Eq. (4.41). The derivation of a technique using higher-order iterators will be shown below in detail for two evaluations per step. Higher-order recipes can be found in most books on numerical methods for ODEs.

The key to the Runge-Kutta method is to properly arrange each of the evaluations in a particular step to depend on the previous evaluations in the same step.

In the second-order model:

if:

$$k_1 = f(t(n), y(t(n)))(\Delta t) \quad (4.42)$$

then:

$$k_2 = f(t(n) + \alpha\Delta t, y(t(n)) + \beta k_1)(\Delta t) \quad (4.43)$$

and

$$y(t(n+1)) = y(t(n)) + ak_1 + bk_2 \quad (4.44)$$

where $a, b, \alpha$, and $\beta$ are unknown parameters to be determined. They should be chosen such that Eq.
(4.44) is correct to order $(\Delta t)^3$.

To find $a, b, \alpha$, and $\beta$, let us compute $y(t(n+1))$ in two different ways. First, Taylor expanding the function $y(t(n+1))$ to order $(\Delta t)^2$, we obtain: \ No newline at end of file
diff --git a/samples/texts/348597/page_126.md b/samples/texts/348597/page_126.md
new file mode 100644
index 0000000000000000000000000000000000000000..e22d62165b774551d6556dba24432f96dad6f258
--- /dev/null
+++ b/samples/texts/348597/page_126.md
@@ -0,0 +1,44 @@
+$$y(t(n+1)) = y(t(n)) + \frac{dy(t(n))}{dt}(\Delta t) + \frac{d^2 y(t(n))}{dt^2} \frac{(\Delta t)^2}{2} \quad (4.45)$$

Recalling Eq. (4.41) and the expression for the total derivative of a function of two variables as a function of its partial derivatives, we have:

$$\frac{dy(t(n))}{dt} = f(t(n), y(t(n))) \quad (4.46)$$

$$
\begin{align}
\frac{d^2 y(t(n))}{dt^2} &= \frac{d}{dt} \left( \frac{dy(t(n))}{dt} \right) \tag{4.47} \\
&= \frac{\partial f(t(n), y(t(n)))}{\partial t} + \frac{\partial f(t(n), y(t(n)))}{\partial y} f(t(n), y(t(n)))
\end{align}
$$

Combining Eqs. (4.45) to (4.47), it follows that to second order in $(\Delta t)$:

$$
\begin{equation}
\begin{split}
y(t(n+1)) = y(t(n)) &+ f(t(n), y(t(n)))(\Delta t) + \\
& \left[ \frac{\partial f(t(n), y(t(n)))}{\partial t} + \frac{\partial f(t(n), y(t(n)))}{\partial y} f(t(n), y(t(n))) \right] \frac{(\Delta t)^2}{2}
\end{split}
\tag{4.48}
\end{equation}
$$

Next, let us Taylor expand $k_2$ to second order in $(\Delta t)$. This results in:

$$k_2 = f(t(n) + \alpha \Delta t, y(t(n)) + \beta k_1)(\Delta t) = \left[ f(t(n), y(t(n))) + \alpha(\Delta t) \frac{\partial f(t(n), y(t(n)))}{\partial t} + (\beta k_1) \frac{\partial f(t(n), y(t(n)))}{\partial y} \right] (\Delta t) \quad (4.49)$$

Combining Eqs.
(4.42), (4.44), and (4.49), we get the other expression for $y(t(n+1))$, correct to second order in $(\Delta t)$:

$$
\begin{equation}
\begin{split}
y(t(n+1)) = y(t(n)) &+ (a+b)f(t(n), y(t(n)))(\Delta t) \\
& + \alpha b \frac{\partial f(t(n), y(t(n)))}{\partial t} (\Delta t)^2 + b\beta \frac{\partial f(t(n), y(t(n)))}{\partial y} f(t(n), y(t(n)))(\Delta t)^2
\end{split}
\tag{4.50}
\end{equation}
$$

Now, comparing Eqs. (4.48) and (4.50), we obtain the following equalities:

$$a+b=1; \quad \alpha b = 1/2; \quad b\beta = 1/2 \qquad (4.51)$$ \ No newline at end of file
diff --git a/samples/texts/348597/page_127.md b/samples/texts/348597/page_127.md
new file mode 100644
index 0000000000000000000000000000000000000000..e05636737d7de65a69222c80be930b9e546ead72
--- /dev/null
+++ b/samples/texts/348597/page_127.md
@@ -0,0 +1,33 @@
+We have three equations in four unknowns; the usual convention is to fix $a = 1/2$, giving for the other quantities:

$$b = 1/2; \quad \alpha = 1; \quad \beta = 1 \tag{4.52}$$

finally leading to the following expressions for the second-order iterator and its parameters:

$$k_1 = f(t(n), y(t(n)))(\Delta t) \tag{4.53a}$$

$$k_2 = f(t(n) + \Delta t, y(t(n)) + k_1)(\Delta t) \tag{4.53b}$$

$$y(t(n+1)) = y(t(n)) + \frac{k_1 + k_2}{2} \tag{4.53c}$$

Next, we give, without proof, the famous fourth-order Runge-Kutta iterator, one of the most widely used algorithms for solving ODEs in the different fields of science and engineering:

$$k_1 = f(t(n), y(t(n)))(\Delta t) \tag{4.54a}$$

$$k_2 = f(t(n) + \Delta t / 2, y(t(n)) + k_1 / 2)(\Delta t) \tag{4.54b}$$

$$k_3 = f(t(n) + \Delta t / 2, y(t(n)) + k_2 / 2)(\Delta t) \tag{4.54c}$$

$$k_4 = f(t(n) + \Delta t, y(t(n)) + k_3)(\Delta t) \tag{4.54d}$$

$$y(t(n+1)) = y(t(n)) + \frac{k_1 + 2k_2 + 2k_3 + k_4}{6} \tag{4.54e}$$

The last point that we need to address before leaving this subsection is what to do in case we have an ODE with higher derivatives than the
first. The answer is that we reduce the *n*th-order ODE to a system of *n* first-order ODEs.

**Example 4.10**

Reduce the following second-order differential equation into two first-order differential equations:

$$ay'' + by' + cy = \sin(t) \tag{4.55}$$

with the initial conditions: $y(t=0) = 0$ and $y'(t=0) = 0$ \ No newline at end of file
diff --git a/samples/texts/348597/page_128.md b/samples/texts/348597/page_128.md
new file mode 100644
index 0000000000000000000000000000000000000000..7611117c93071334c4bb21228ccbcbdf33f73c52
--- /dev/null
+++ b/samples/texts/348597/page_128.md
@@ -0,0 +1,47 @@
+(where the primed and double-primed functions refer, respectively, to the first and second derivatives of this function).

**Solution:** Introduce the two-dimensional array z, and define

$$z(1) = y \tag{4.56a}$$

$$z(2) = y' \tag{4.56b}$$

The system of first-order equations now reads:

$$z'(1) = z(2) \tag{4.57a}$$

$$z'(2) = (1/a)(\sin(t) - bz(2) - cz(1)) \tag{4.57b}$$

### Example 4.11

Using the fourth-order Runge-Kutta iterator, numerically solve the same problem as in Application 1 following Example 4.9.

**Solution:** Edit and save the following *function M-files*:

```matlab
function zp=zprime(t,z)
a=1; b=3; c=1;
zp(1,1)=z(2,1);
zp(2,1)=(1/a)*(sin(t)-b*z(2,1)-c*z(1,1));
```

The above file specifies the system of ODEs that we are trying to solve.
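As an aside, the second-order iterator of Eqs. (4.53a) to (4.53c) is easy to validate on a problem with a known solution before moving on to the fourth-order scheme. The sketch below is in Python (not part of the original MATLAB development) and uses the made-up test equation y' = y, y(0) = 1, whose exact solution at t = 1 is e:

```python
import math

def f(t, y):
    return y                        # test ODE: y' = y, exact solution e^t

n = 1000
dt = 1.0 / n
t, y = 0.0, 1.0
for _ in range(n):
    k1 = f(t, y) * dt               # Eq. (4.53a)
    k2 = f(t + dt, y + k1) * dt     # Eq. (4.53b)
    y += 0.5 * (k1 + k2)            # Eq. (4.53c)
    t += dt
```

The final error scales as $(\Delta t)^2$, consistent with the second-order accuracy derived above.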
+
Next, in another *function M-file*, we edit and save the fourth-order Runge-Kutta algorithm, specifically:

```matlab
function zn=prk4(t,z,dt)
k1=dt*zprime(t,z);
k2=dt*zprime(t+dt/2,z+k1/2);
k3=dt*zprime(t+dt/2,z+k2/2);
k4=dt*zprime(t+dt,z+k3);
zn=z+(k1+2*k2+2*k3+k4)/6;
```

Finally, edit and execute the following *script M-file*:

```matlab
yinit=0;
ypinit=0;
z=[yinit;ypinit];
``` \ No newline at end of file
diff --git a/samples/texts/348597/page_129.md b/samples/texts/348597/page_129.md
new file mode 100644
index 0000000000000000000000000000000000000000..1273a938a3bc563840472632248318085ea402c4
--- /dev/null
+++ b/samples/texts/348597/page_129.md
@@ -0,0 +1,42 @@
+```matlab
tinit=0;
tfin=16*pi;
N=1001;
t=linspace(tinit,tfin,N);
dt=(tfin-tinit)/(N-1);

for k=1:N-1
    z(:,k+1)=prk4(t(k),z(:,k),dt);
end

plot(t,z(1,:),t,sin(t),'--')
```

In the above plot, we are comparing the temporal profiles of the voltage difference across the capacitor with that of the source voltage.

### 4.7.3 MATLAB ODE Solvers

MATLAB has many ODE solvers, `ODE23` and `ODE45` being the most commonly used. `ODE23` is based on a pair of second-order and third-order Runge-Kutta methods running simultaneously to solve the ODE. The program automatically corrects the step size if, at any stage of the calculation, the discrepancy between the two methods' answers would lead to an error larger than the allowed tolerance.

To use this solver, we start by creating a *function M-file* that includes the system of equations under consideration. This function is then called from the command window with the `ODE23` or `ODE45` command.

#### Example 4.12

Using the MATLAB ODE solver, find the voltage across the capacitor in the RLC circuit of Example 4.11, and compare it to the source potential time-profile.
+ +**Solution:** Edit and save the following *function M-file*: + +```matlab +function zp=RLC11(t,z) +a=1; +b=3; +c=1; +zp(1,1)=z(2,1); +zp(2,1)=(1/a)*(sin(t)-b*z(2,1)-c*z(1,1)); +``` + +Next, edit and execute the following *script M-file*: + +```matlab +tspan=[0 16*pi]; +``` \ No newline at end of file diff --git a/samples/texts/348597/page_13.md b/samples/texts/348597/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..a717bc7a9769f43f4b10c8b367a63ed27d57716e --- /dev/null +++ b/samples/texts/348597/page_13.md @@ -0,0 +1,81 @@ +8.10.2 Unitary Matrices + +8.10.3 Unimodular Matrices + +8.11 MATLAB Commands Review + +## 9. Transformations + +9.1 Two-dimensional (2-D) Geometric Transformations + +9.1.1 Polygonal Figures Construction + +9.1.2 Inversion about the Origin and Reflection about the Coordinate Axes + +9.1.3 Rotation around the Origin + +9.1.4 Scaling + +9.1.5 Translation + +9.2 Homogeneous Coordinates + +9.3 Manipulation of 2-D Images + +9.3.1 Geometrical Manipulation of Images + +9.3.2 Digital Image Processing + +9.3.3 Encrypting an Image + +9.4 Lorentz Transformation* + +9.4.1 Space-Time Coordinates + +9.4.2 Addition Theorem for Velocities + +9.5 MATLAB Commands Review + +## 10. 
A Taste of Probability Theory* + +10.1 Introduction + +10.2 Basics + +10.3 Addition Laws for Probabilities + +10.4 Conditional Probability + +10.4.1 Total Probability and Bayes Theorems + +10.5 Repeated Trials + +10.5.1 Generalization of Bernoulli Trials + +10.6 The Poisson and the Normal Distributions + +10.6.1 The Poisson Distribution + +10.6.2 The Normal Distribution + +## Supplement: Review of Elementary Functions + +S.1 Affine Functions + +S.2 Quadratic Functions + +S.3 Polynomial Functions + +S.4 Trigonometric Functions + +S.5 Inverse Trigonometric Functions + +S.6 The Natural Logarithmic Function + +S.7 The Exponential Function + +S.8 The Hyperbolic Functions + +S.9 The Inverse Hyperbolic Functions + +## Appendix: Some Useful Formulae \ No newline at end of file diff --git a/samples/texts/348597/page_130.md b/samples/texts/348597/page_130.md new file mode 100644 index 0000000000000000000000000000000000000000..7af0134558e6ab9b7a14fc6299afd573edf4fa38 --- /dev/null +++ b/samples/texts/348597/page_130.md @@ -0,0 +1,20 @@ +FIGURE 4.6 + +The potential differences across the source (dashed line) and the capacitor (solid line) in an RLC circuit with an ac source. [LC = 1, RC = 3, and V_s = sin(t)]. + +``` +zin=[0;0]; +[t,z]=ode23('RLC11',tspan,zin); +plot(t,z(:,1),t,sin(t)) +xlabel('Normalized Time') +``` + +The results are plotted in Figure 4.6. Note the phase shift between the two potential differences. + +**Example 4.13** + +Using the MATLAB ODE solver, solve the problem of relaxation oscillations in lasers. + +**Solution:** Because many readers may not be familiar with the statement of the problem, let us first introduce the physical background to the problem. + +A simple gas laser consists of two parallel mirrors sandwiching a tube with a gas, selected for having two excitation levels separated in energy by an amount equal to the energy of the photon quantum that we are attempting to have the laser system produce. 
In a laser (light amplification by stimulated emission of radiation), a pumping mechanism takes the atom to the upper excited level. However, the atom does not stay in this level; it decays to lower levels, including the lower excited level of interest, for two reasons: (1) the finite lifetime of all excited states of atoms; and (2) stimulated emission, a quantum mechanical phenomenon associated with the statistics of the pho- \ No newline at end of file
diff --git a/samples/texts/348597/page_131.md b/samples/texts/348597/page_131.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ccbf8c8a9c08b7bfaa29ccbc17462af083ac917
--- /dev/null
+++ b/samples/texts/348597/page_131.md
@@ -0,0 +1,49 @@
+tons (photons are bosons), which predicts that, in the presence of an electromagnetic field having a frequency close to that of the photon emitted in the transition between the upper excited and lower excited states, the atom's emission rate is enhanced, and this enhancement is larger the more photons are present in its vicinity. On the other hand, the rate of change of the number of photons is equal to the rate generated from the decay of the atoms due to stimulated emission, minus the decay due to the finite lifetime of the photon in the resonating cavity. Putting all this together, one is led, in the simplest approximation, to write what are called the rate equations for the number of atoms in the excited state and for the number of photons in the cavity.
+These coupled equations, in their simplest forms, are given by:

$$
\frac{dN}{dt} = P - \frac{N}{\tau_{\text{decay}}} - BnN \quad (4.58)
$$

$$
\frac{dn}{dt} = - \frac{n}{\tau_{\text{cavity}}} + BnN \qquad (4.59)
$$

where $N$ is the normalized number of atoms in the atom's upper excited state, $n$ is the normalized number of photons present, $P$ is the pumping rate, $\tau_{\text{decay}}$ is the atomic decay time from the upper excited state due to all effects except that of stimulated emission, $\tau_{\text{cavity}}$ is the lifetime of the photon in the resonant cavity, and $B$ is the Einstein coefficient for stimulated emission.

These nonlinear differential equations describe the dynamics of laser operation. Now let us come back to relaxation oscillations in lasers, which is the problem at hand. Physically, this is an interplay between *N* and *n*. An increase in the photon number causes an increase in stimulated emission, which causes a decrease in the population of the higher excited level. This, in turn, causes a reduction in the photon gain, which tends to decrease the number of photons present and, in turn, decreases stimulated emission. This leads to the build-up of the higher excited state population, which increases the rate of change of the photon number, and the cycle resumes, but such that at each new cycle the amplitude of the oscillations is dampened compared with the cycle just before it, until finally the system reaches a steady state.

To compute the dynamics of the problem, we proceed in two steps. First, we generate the *function M-file* that contains the rate equations, and then proceed to solve these ODEs by calling the MATLAB ODE solver. We use typical numbers for gas lasers.
+
Specifically, the function M-file representing the laser rate equations is given by:

```matlab
function yp=laser1(t,y)
p=30;            %pumping rate
gamma=10^(-2);   %inverse natural lifetime
``` \ No newline at end of file
diff --git a/samples/texts/348597/page_132.md b/samples/texts/348597/page_132.md
new file mode 100644
index 0000000000000000000000000000000000000000..92860f14c816a9da187b3967cd01697163d94ec0
--- /dev/null
+++ b/samples/texts/348597/page_132.md
@@ -0,0 +1,34 @@
+```matlab
B=3;             %stimulated emission coefficient
c=30;            %inverse lifetime of photon in cavity

yp(1,1)=p-gamma*y(1,1)-B*y(1,1)*y(2,1);
yp(2,1)=-c*y(2,1)+B*y(1,1)*y(2,1);
```

The *script M-file* to compute the laser dynamics and thus simulate the relaxation oscillations is:

```matlab
tspan=[0 3];
yin=[1 1];
[t,y]=ode23('laser1',tspan,yin);

subplot(3,1,1)
plot(t,y(:,1))
xlabel('Normalized Time')
ylabel('N')

subplot(3,1,2);
plot(t,y(:,2))
xlabel('Normalized Time')
ylabel('n')

subplot(3,1,3);
plot(y(:,1),y(:,2))
xlabel('N')
ylabel('n')
```

As can be observed in Figure 4.7, the oscillations, as predicted, damp out after a while and the dynamical variables reach a steady state. The phase diagram, shown in the bottom panel, is an alternate method to show how the population of the atomic higher excited state and the photon number density reach the steady state.

**Question:** Compute analytically, from Eqs. (4.58) and (4.59), the steady-state values for the higher excited state population and for the photon number, and compare with the numerically obtained asymptotic values.

## In-Class Exercise

**Pb.
4.40** By changing the values of the appropriate parameters in the above programs, find separately the effects of increasing or decreasing the value of \ No newline at end of file
diff --git a/samples/texts/348597/page_133.md b/samples/texts/348597/page_133.md
new file mode 100644
index 0000000000000000000000000000000000000000..26328700ea8d8c8e95eb0a64864b78661ef8c0c8
--- /dev/null
+++ b/samples/texts/348597/page_133.md
@@ -0,0 +1,11 @@
+$\tau_{cavity}$, and the effect of the pumping rate on the magnitude and the persistence of the oscillations.

**FIGURE 4.7**

The dynamics of a laser in the relaxation oscillations regime. Top panel: Plot of the higher excited level atom population as a function of the normalized time. Middle panel: Plot of the number of photons as a function of the normalized time. Bottom panel: Phase diagram of the photon number vs. the higher excited level atom population.

**Example 4.14**

Using the rate equations developed in Example 4.13, simulate the Q-switching of a laser.

*Solution:* First, an explanation of the statement of the problem. In Example 4.13, we showed how, following an initial transient period whereby one observes relaxation oscillations, a laser, in the presence of steady pumping, reaches steady-state operation after a while. This is referred to as continuous wave (cw) operation of the laser. In this example, we shall explore the other mode of laser operation, the so-called pulsed mode. In this regime, the experimentalist, through a temporary modification in the absorption properties of the laser resonator, prevents the laser from oscillating, thus allowing the population of the higher excited state of the atom to keep building up to a very high level before the atoms are allowed to emit any photons.
Then, at the desired \ No newline at end of file
diff --git a/samples/texts/348597/page_134.md b/samples/texts/348597/page_134.md
new file mode 100644
index 0000000000000000000000000000000000000000..cfce8ab9c7b3bf0b2dfd778415229a6d94477421
--- /dev/null
+++ b/samples/texts/348597/page_134.md
@@ -0,0 +1,9 @@
+FIGURE 4.8

The temporal profile of the photon burst emitted in a Q-switched laser for different initial values of the excited level atoms population. Top panel: N(0) = 50. Middle panel: N(0) = 100. Bottom panel: N(0) = 300.

moment, the laser resonator is allowed back to a state where its optical losses are small, thus triggering the excited atoms to dump their stored energy into a short burst of photons. It is this regime that we propose to study in this example.

The laser dynamics are, of course, still described by the rate equations [i.e., Eqs. (4.58) and (4.59)]. What we need to modify from the previous problem are the initial conditions for the system of coupled ODEs. At the origin of time (i.e., at $t = 0$, the triggering time), the initial value $N(0)$ of the population of the higher excited state of the atom is, in this instance (because of the induced build-up), much larger than the corresponding photon population $n(0)$. Figure 4.8 shows the ensuing dynamics for the photon population for different values of $N(0)$. We assumed in these simulations the following values for the parameters in the laser1 function M-file ($p=0$; $B=3$; $c=100$; $\gamma=0.01$).

In examining Figure 4.8, we observe that as $N(0)$ increases, the pulse's total energy increases, as it should, since more energy is stored in the excited atoms. Furthermore, the duration of the produced pulse (i.e., the width of the pulse temporal profile) narrows, and the delay of its peak from the trigger time becomes smaller, as the number of initially excited atoms increases.
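The asymptotic values queried in the Question of Example 4.13 can also be cross-checked outside MATLAB. The sketch below (Python, not from the original text) integrates the rate equations (4.58) and (4.59) with the cw parameters of the laser1 M-file (P = 30, γ = 10⁻², B = 3, and an inverse cavity lifetime c = 30) using a basic fourth-order Runge-Kutta loop; setting the time derivatives to zero gives the fixed point N* = c/B and n* = (P − γN*)/(BN*), which the integration approaches at large t:

```python
def rates(N, n, P=30.0, gamma=1e-2, B=3.0, c=30.0):
    """Right-hand sides of the laser rate equations, Eqs. (4.58)-(4.59)."""
    return (P - gamma * N - B * n * N, -c * n + B * n * N)

def rk4(N, n, dt):
    # One fourth-order Runge-Kutta step, Eqs. (4.54a)-(4.54e)
    k1N, k1n = rates(N, n)
    k2N, k2n = rates(N + 0.5 * dt * k1N, n + 0.5 * dt * k1n)
    k3N, k3n = rates(N + 0.5 * dt * k2N, n + 0.5 * dt * k2n)
    k4N, k4n = rates(N + dt * k3N, n + dt * k3n)
    return (N + dt * (k1N + 2 * k2N + 2 * k3N + k4N) / 6.0,
            n + dt * (k1n + 2 * k2n + 2 * k3n + k4n) / 6.0)

N, n = 1.0, 1.0                 # same initial conditions as the cw example
dt = 1e-3
for _ in range(20000):          # integrate out to t = 20
    N, n = rk4(N, n, dt)
```

By t = 20 the relaxation oscillations have damped out and (N, n) sits on the fixed point of the rate equations.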
\ No newline at end of file
diff --git a/samples/texts/348597/page_135.md b/samples/texts/348597/page_135.md
new file mode 100644
index 0000000000000000000000000000000000000000..b69124cdbce36f38a26e79e0928b33b820047de9
--- /dev/null
+++ b/samples/texts/348597/page_135.md
@@ -0,0 +1,7 @@
+*In-Class Exercise*

**Pb. 4.41** Investigate how changes in the values of $\tau_{cavity}$ and $\tau_{decay}$ modify the duration of the produced pulse. Plot the Q-switched pulse duration as a function of each of these variables.

## 4.8 MATLAB Commands Review
| Command | Description |
| --- | --- |
| `diff` | Takes the difference between consecutive elements in an array. |
| `ode23`, `ode45` | Ordinary differential equation solvers. |
| `prod` | Finds the product of all the elements belonging to an array. |
| `quad`, `quad8` | Integrate a function between fixed limits. |
| `semilogy` | Plots a graph with the abscissa in linear scale, while the ordinate is in a logarithmic scale. |
| `sum` | Sums all the elements of an array. |
| `trapz` | Finds the integral using the Trapezoid rule. |
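For readers working outside MATLAB, the core entries of this table are straightforward to emulate; the sketch below (plain Python, illustrative only, with made-up sample data) mirrors trapz, diff, and prod:

```python
import math

# trapz analogue: Trapezoid-rule integral of sin over [0, pi] (exact value: 2)
n = 10000
dx = math.pi / n
x = [k * dx for k in range(n + 1)]
y = [math.sin(xk) for xk in x]
integral = sum(0.5 * (y[k] + y[k + 1]) * dx for k in range(n))

# diff analogue: differences between consecutive elements
d = [x[k + 1] - x[k] for k in range(n)]

# prod analogue: product of all elements of an array
p = math.prod([1, 2, 3, 4])
```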
\ No newline at end of file diff --git a/samples/texts/348597/page_136.md b/samples/texts/348597/page_136.md new file mode 100644 index 0000000000000000000000000000000000000000..a20641ec01b8c31eea451f76d04a347300a312b2 --- /dev/null +++ b/samples/texts/348597/page_136.md @@ -0,0 +1,19 @@ +# 5 + +## Root Solving and Optimization Methods + +In this chapter, we first learn some elementary numerical techniques and the use of the `fsolve` and `fzero` commands from the MATLAB library to obtain the real roots (or zeros) of an arbitrary function. Then, we discuss the use of the MATLAB command `roots` for finding all roots of a polynomial. Following this, we consider the Golden Section method and the `fmin` and `fmins` MATLAB commands for optimizing (finding the minimum or maximum value of a function) over an interval. Our discussions pertain exclusively to problems with one and two variables (input) and do not include the important problem of optimization with constraints. + +### 5.1 Finding the Real Roots of a Function + +This section explores the different categories of techniques for finding the real roots (zeros) of an arbitrary function. We outline the required steps for computing the zeros using the graphical commands, the numerical techniques known as the Direct Iterative and the Newton-Raphson methods, and the built-in `fsolve` and `fzero` functions of MATLAB. + +#### 5.1.1 Graphical Method + +In the graphical method, we find the zeros of a single variable function by implementing the following steps: + +1. Plot the particular function over a suitable domain. + +2. Identify the neighborhoods where the curve crosses the x-axis (there may be more than one point); and at each such point, the following steps should be independently implemented. + +3. Zoom in on the neighborhood of each intersection point by repeated application of the MATLAB `axis` or `zoom` commands. 
\ No newline at end of file diff --git a/samples/texts/348597/page_137.md new file mode 100644 index 0000000000000000000000000000000000000000..abe87e95c701d5f42051aae9c1abc13a836ef826 --- /dev/null +++ b/samples/texts/348597/page_137.md @@ -0,0 +1,25 @@
4. Use the crosshair of the **ginput** command to read the coordinates of the intersection.

In problems where we desire to find the zeros of a function that depends on two input variables, we follow (conceptually) the same steps above, but use 3-D graphics.

## In-Class Exercises

**Pb. 5.1** Find graphically the two points in the x-y plane where the two surfaces, given below, intersect:

$$z_1 = 7 - \sqrt{25 + x^2 + y^2}$$

$$z_2 = 4 - 2x - 4y$$

*(Hint: Use the techniques of surface and contour renderings, detailed in Chapter 1, to plot the zero-height contours for both surfaces; then read off the intersections of the resulting curves.)*

**Pb. 5.2** Verify your graphical answer to **Pb. 5.1** against the one you would obtain analytically.

### 5.1.2 Numerical Methods

This subsection briefly discusses two techniques for finding the zeros of a function in one variable, namely the Direct Iterative and the Newton-Raphson techniques. We do not concern ourselves too much, at this point, with an optimization of the routine execution time, nor with the inherent limits of each of the methods, except in the most general way. Furthermore, to avoid the inherent limits of these techniques in some pathological cases, we assume that we plot each function under consideration, verify that it crosses the x-axis, and satisfy ourselves in an empirical way that there does not seem to be any pathology around the intersection point before we embark on the application of the following algorithms. These statements will be made more rigorous to you in future courses in numerical analysis.
#### 5.1.2.1 The Direct Iterative Method

This is a particularly useful technique when the equation $f(x) = 0$ can be cast in the form:

$$x = F(x) \tag{5.1}$$ \ No newline at end of file diff --git a/samples/texts/348597/page_138.md new file mode 100644 index 0000000000000000000000000000000000000000..cdd8b68c31da3665ca31412fa0da1f7b5fcb049b --- /dev/null +++ b/samples/texts/348597/page_138.md @@ -0,0 +1,33 @@
$F(x)$ is then called an iteration function, and it can be used for the generation of the sequence:

$$x_{k+1} = F(x_k) \tag{5.2}$$

To guarantee that this method gives accurate results in a specific case, the function should be continuous and should satisfy the contraction condition:

$$|F(x_n) - F(x_m)| \le s|x_n - x_m| \tag{5.3}$$

where $0 \le s < 1$; that is, the changes in the value of the function are smaller than the changes in the value of the arguments. A proof that, under these conditions, the iteration possesses a fixed point (i.e., that ultimately the difference between two successive iterates can be made arbitrarily small) follows immediately from the above contraction condition [Eq. (5.3)].
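Before turning to the proof, the iteration of Eq. (5.2) is worth seeing run. The sketch below is our own illustration, written in Python rather than the text's MATLAB, and uses the classic contraction $F(x) = \cos(x)$ (for which $s = \max|\sin x| < 1$ near the fixed point):

```python
import math

def direct_iterate(F, x_guess, tol=1e-10, max_iter=1000):
    """Direct Iterative method: repeat x_{k+1} = F(x_k), Eq. (5.2),
    until two successive iterates differ by less than tol."""
    x = x_guess
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# F(x) = cos(x) satisfies the contraction condition of Eq. (5.3) near its
# fixed point, so the sequence converges to the solution of x = cos(x).
root = direct_iterate(math.cos, 1.0)
```

The returned value solves $x = \cos(x)$; any other contraction $F$ can be passed in the same way.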
PROOF Let $x_{\text{guess}}$ be the first term of the iteration; then:

$$|F(x_1) - F(x_{\text{guess}})| \le s|x_1 - x_{\text{guess}}| \tag{5.4}$$

but since

$$F(x_{\text{guess}}) = x_1 \quad \text{and} \quad F(x_1) = x_2 \tag{5.5}$$

then

$$|x_2 - x_1| \le s|x_1 - x_{\text{guess}}| \tag{5.6}$$

Similarly,

$$|F(x_2) - F(x_1)| \le s|x_2 - x_1| \tag{5.7}$$

translates into

$$|x_3 - x_2| \le s|x_2 - x_1| \le s^2|x_1 - x_{\text{guess}}| \tag{5.8}$$

The argument can be extended to the $(m+1)$-th iteration, where we can assert that:

$$|x_{m+1} - x_m| \le s^m |x_1 - x_{\text{guess}}| \tag{5.9}$$ \ No newline at end of file diff --git a/samples/texts/348597/page_139.md new file mode 100644 index 0000000000000000000000000000000000000000..0adeb6922208102a9627b81bda3ad259a331bbb2 --- /dev/null +++ b/samples/texts/348597/page_139.md @@ -0,0 +1,46 @@
but, because *s* is a non-negative number smaller than 1, the right-hand side of the inequality in Eq. (5.9) can be made, for a large enough value of *m*, arbitrarily small, and the above iterative procedure does indeed converge to a fixed point.

**Example 5.1**

Find the zero of the function:

$$
y = x - \sin(x) - 1 \tag{5.10}
$$

Solution: At the zero, the iterative form can be written as:

$$
x(k) = \sin(x(k-1)) + 1 \tag{5.11}
$$

The contraction property, required for the application of this method, is valid in this case because the difference between two sines is always smaller than the difference between their arguments ($|\sin a - \sin b| \le |a - b|$).
The fixed point can then be obtained by the following MATLAB program:

```matlab
x(1)=1;   % value of the initial guess
for k=2:20
    x(k)=sin(x(k-1))+1;
end
```

If we display the successive values of x, we obtain:

```
x
ans =
1.0000  1.8415  1.9636  1.9238  1.9383  1.9332  1.9350
1.9344  1.9346  1.9345  1.9346  1.9346  1.9346  1.9346
1.9346  1.9346  1.9346  1.9346  1.9346  1.9346  1.9346
```

As can be noticed from the above printout, about 11 iterations were required to get the value of the fixed point accurate to one part in 10,000.

**NOTE** A more efficient technique to find the answer within a prescribed error tolerance is to write the program with the while command, where we can specify the desired tolerance level.

#### 5.1.2.2 The Newton-Raphson Method

This method requires knowledge of both the function and its derivative. The method makes use of the geometrical interpretation of the derivative being the tangent at a particular point, and of the fact that the tangent is the limit of the chord between two close points on the curve. It is based on the fact that if $f(x_1)$ and $f(x_2)$ have opposite signs and the function $f$ is continuous on the interval \ No newline at end of file diff --git a/samples/texts/348597/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..f8c9999dcc8836241d3e3c6747182520331136e6 --- /dev/null +++ b/samples/texts/348597/page_14.md @@ -0,0 +1,5 @@
Addendum: MATLAB 6

Selected References

*The asterisk indicates more advanced material that may be skipped in a first reading. \ No newline at end of file diff --git a/samples/texts/348597/page_140.md new file mode 100644 index 0000000000000000000000000000000000000000..967fb81fa9d1545b1c2ad229600ecaa89eda7178 --- /dev/null +++ b/samples/texts/348597/page_140.md @@ -0,0 +1,27 @@
$[x_1, x_2]$, we know from the Intermediate Value theorem of calculus that there is at least one value $x_c$ between $x_1$ and $x_2$, such that $f(x_c) = 0$.
A sufficient condition for this method to work is that f'(x) and f''(x) have constant sign on an open interval that contains the solution f(x) = 0; in that case, any starting point that is close enough to the solution will give successive Newton's approximations that converge to the solution. + +Let $x_{\text{guess}}$ and $x$ have the same meaning as in the iterative method; therefore, $f(x) = 0$, and the definition of the derivative results in the equation: + +$$x = x_{\text{guess}} - \frac{f(x_{\text{guess}})}{f'(x_{\text{guess}})} \quad (5.12)$$ + +This relation can now be the basis of an iterative function given by: + +$$x(k) = x(k-1) - \frac{f(x(k-1))}{f'(x(k-1))} \quad (5.13)$$ + +The fixed point can be obtained, in general, for the same initial guess and tolerance, in a smaller number of iterations in the Newton-Raphson method than in the Direct Iteration method. + +## In-Class Exercise + +**Pb. 5.3** Write a routine to find the zero of the function $y = x - \sin(x) - 1$ using the Newton-Raphson algorithm. + +**Pb. 5.4** Compare the answers from the present algorithm with that of the Direct Iterative method, at each iteration step, in the search for the zeros of the function $y = x - \sin(x) - 1$, and comment on which of the two methods appears to be more effective and converges faster. + +## Example 5.2 + +Apply the Newton-Raphson method to find the voltage-current relation in a diode circuit with an ac voltage source. 
+ +**Solution:** The diode is a nonlinear semiconductor electronics device with a voltage current curve that is described, for voltage values larger than the reverse breakdown potential (a negative quantity), by: + +$$i = I_s (e^{v/kT} - 1) \quad (5.14)$$ + +where $I_s$ is the reverse saturation current (which is typically on the order of 10⁻⁶ mA), and $kT$ is the average thermal energy of an electron divided by its \ No newline at end of file diff --git a/samples/texts/348597/page_141.md b/samples/texts/348597/page_141.md new file mode 100644 index 0000000000000000000000000000000000000000..790e3ccab229b9054f03e883715748b4cda5e0da --- /dev/null +++ b/samples/texts/348597/page_141.md @@ -0,0 +1,22 @@ +FIGURE 5.1 +The diode semi-rectifier circuit. + +charge at the diode operating temperature (equal to 1/40 V at room temperature). An important application of this device is to use it as a rectifier (a device that passes the current in one direction only). (Can you think of a practical application for this device?) + +The problem we want to solve is to find the current through the circuit (shown in Figure 5.1) as a function of time if we are given a sinusoidal time-dependent source potential. + +The other equation, in addition to Eq. (5.14) that we need in order to set the problem, is Ohm's law across R. This law, as previously noted, states that the current passing through a resistor is equal to the potential difference across the resistor, divided by the value of the resistance: + +$$i = \frac{V_s - v}{R} \qquad (5.15)$$ + +Eliminating the current from Eqs. (5.14) and (5.15), we obtain a nonlinear equation in the potential across the diode. Solving this problem is then reduced to finding the roots of the function *f* defined as: + +$$f(v) = I_s [\exp(v / k'T) - 1] - \left( \frac{V_s - v}{R} \right) \qquad (5.16)$$ + +where the potential across the diode is the unknown. 
In the Newton-Raphson method, we also need for our iteration the derivative of this function:

$$f'(v) = \left(\frac{1}{k'T}\right) I_s \exp\left(\frac{v}{k'T}\right) + \frac{1}{R} \qquad (5.17)$$

For a particular value of $V_s$, we need to determine *v* and, from this value of the potential across the diode, we can determine the current in the circuit. However, because we are interested in obtaining the current through \ No newline at end of file diff --git a/samples/texts/348597/page_142.md new file mode 100644 index 0000000000000000000000000000000000000000..9df58b7d2899c684a02b77091e4c23a014666f5 --- /dev/null +++ b/samples/texts/348597/page_142.md @@ -0,0 +1,43 @@
the diode for a source potential that is a function of time, we need to repeat the Newton-Raphson iteration for each of the different source voltage values at the different times. The sequence of the computation would proceed as follows:

1. Generate the time array.

2. Generate the source potential for the different elements in the time array.

3. For each time array entry, find the potential across the diode using the Newton-Raphson method.

4. Obtain the current array from the potential array.

5. Plot the source potential and the current arrays as a function of the time array.

Assuming that the source potential is given by:

$$
V_s = V_0 \sin(2\pi ft) \tag{5.18}
$$

and that $f = 60$ Hz, $V_0 = 5$ V, $kT = 0.025$ V, $R = 500~\Omega$, and the saturation current $I_s$ is $10^{-6}$ mA; the following *script M-file* finds the current in this circuit:

Is=10^(-9);
R=500;
kT=1/40;
f=60;
V0=5;
t=linspace(0,2/f,600);
L=length(t);
K=200;
Vs=(V0*sin(2*pi*t*f))'*ones(1,K);
v=zeros(L,K);
i=zeros(L,K);

for k=1:K-1
v(:,k+1)=v(:,k)-(Is*(exp((1/kT)*v(:,k))-1)-...
    (1/R)*(Vs(:,k)-v(:,k)))./...
+ ((1/kT)*Is*exp((1/kT)*v(:,k))+1/R); +i(:,k+1)=(Vs(:,k+1)-v(:,k+1))/R; +end + +plot(t,1000*i(:,K),'b',t,Vs(:,K),'g') \ No newline at end of file diff --git a/samples/texts/348597/page_143.md b/samples/texts/348597/page_143.md new file mode 100644 index 0000000000000000000000000000000000000000..d69aaa8414624ef7e838a874687c7884da6d9cc1 --- /dev/null +++ b/samples/texts/348597/page_143.md @@ -0,0 +1,34 @@ +The current (expressed in mA) and the voltage (in V) of the source will +appear in your graph window when you execute this program. + +**Homework Problem** + +Pb. 5.5 The apparent simplicity of the Newton-Raphson method is very misleading, suffice it to say that some of the original work on fractals started with examples from this model. + +a. State, to the best of your ability, the conditions that the function, its derivative, and/or the original guess should satisfy so that this iterate converges to the correct limit. Supplement your arguments with geometric sketches that illustrate each of the pathologies. + +b. Show that the Newton-Raphson method iterates cannot find the zero of the function: + +$$y = \sqrt{x-3}$$ + +c. Illustrate, with a simple sketch, the reason that this method does not work in part (b). + +5.1.3 MATLAB fsolve and fzero Built-in Functions + +Next, we investigate the use of the MATLAB command **fsolve** for finding +the zeros of any function. We start with a function of one variable. + +The recommended sequence of steps for finding the zeros of a function is +as follows: + +1. Edit a *function M-file* for the function under consideration. + +2. Plot the curve of the function over the appropriate domain, and estimate the values of the zeros. + +3. Using each of the estimates found in (2) above as an initial "guess," + use the command `fsolve` to accurately find each of the roots. 
The syntax is as follows:

`xroot=fsolve('funname',xguess)`

**NOTE** Actually, the MATLAB command **fzero** is quite suitable for finding the zero of a function of one variable. However, we used **fsolve** in the text above because, unlike **fzero**, it can also be used for the two-variables problem. \ No newline at end of file diff --git a/samples/texts/348597/page_144.md new file mode 100644 index 0000000000000000000000000000000000000000..728d08b6d62fdb15d45dcff548428ff4cc1b84fb --- /dev/null +++ b/samples/texts/348597/page_144.md @@ -0,0 +1,32 @@
In the following application, we use the command `fzero` to find the zeros of a Bessel function, and learn in the process some important facts about this often-used special function of applied mathematics.

**Application**

Bessel functions are solutions to Bessel's differential equation of order $n$, given by:

$$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - n^2)y = 0 \quad (5.19)$$

There are special types of Bessel functions referred to as "of the first, second, and third kinds." Bessel functions of integer order appear, *inter alia*, in the expression of the radiation field in cylindrically shaped resonant cavities, and in light diffraction from a circular hole. Bessel functions of half-integer indices (see **Pb. 2.26**) appear in problems of spherical cavities and scattering of electromagnetic waves. Airy functions, a member of the Bessel functions family, appear in a number of important problems of optics and quantum mechanics.

The recursion formula that relates the Bessel function of any kind of a certain order with those of the same kind of adjacent orders is:

$$2nZ_n(x) = xZ_{n-1}(x) + xZ_{n+1}(x) \quad (5.20)$$

where $Z_n(x)$ is the generic designation for all kinds of Bessel functions.

In this application, we concern ourselves only with the Bessel functions of the first kind, usually denoted by $J_n(x)$. Their MATLAB call command is **besselj(n,x)**.
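The recursion of Eq. (5.20) is easy to test numerically. The following Python sketch is ours, not the text's; it builds a stdlib-only stand-in for MATLAB's `besselj` from the integral representation $J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos(n\theta - x\sin\theta)\,d\theta$, valid for integer $n$, and checks Eq. (5.20) with $Z_n = J_n$ at a sample point:

```python
import math

def besselj(n, x, steps=2000):
    """Integer-order J_n via Bessel's integral representation,
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin t) dt,
    evaluated with the composite Simpson rule (steps must be even)."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    total = f(0.0) + f(math.pi)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / (3 * math.pi)

# Check the recursion 2n*J_n(x) = x*(J_{n-1}(x) + J_{n+1}(x)), Eq. (5.20).
n, x = 3, 2.5
lhs = 2 * n * besselj(n, x)
rhs = x * (besselj(n - 1, x) + besselj(n + 1, x))
```

The identity holds to the accuracy of the quadrature; Simpson's rule on this smooth integrand is more than sufficient here.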
In the present problem, we are interested in the root structure of the Bessel function of the first kind and of zero order. + +In the program that follows, we call the Bessel function from the MATLAB library; however, we could have generated it ourselves using the techniques of Section 4.7 because we know the ODE that it satisfies, and its value and that of its derivative at $x=0$, namely: + +$$J_0(x=0) = 1 \quad \text{and} \quad J'_0(x=0) = 0$$ + +The problem that we want to solve is to find the zeros of $J_0(x)$ and compare to these exact values those obtained from the approximate expression: + +$$x_{0,k} \approx \frac{\pi}{4}(4k-1) + \frac{1}{2\pi(4k-1)} - \frac{31}{6\pi^3(4k-1)^3} + \frac{3779}{15\pi^5(4k-1)^5} + \dots \quad (5.21)$$ + +To implement this task, edit and execute the following *script M-file*: + +``` +for k=1:10 + p(k)=4*k-1; +``` \ No newline at end of file diff --git a/samples/texts/348597/page_145.md b/samples/texts/348597/page_145.md new file mode 100644 index 0000000000000000000000000000000000000000..add828706df28565a2349333c8e590606cbfdd99 --- /dev/null +++ b/samples/texts/348597/page_145.md @@ -0,0 +1,21 @@ +```matlab +x0(k)=fzero('besselj(0,x)',(pi/4)*p(k)); +x0approx(k)=(pi/4)*p(k)+(1/(2*pi))*(p(k)^(-1))-... +(31/6)*(1/pi^3)*(p(k)^(-3))+... +(3779/15)*(1/pi^5)*(p(k)^(-5)); +end + +kk=1:10; +subplot(2,1,1); +plot(kk,x0,'o') +title('Zeros of Zero Order BesselJ Function') +subplot(2,1,2); +semilogy(kk,x0-x0approx,'o') +title('Error in Approximate Values of the Zeros') +``` + +As you can easily observe by examining Figure 5.2, the approximate series is suitable for calculating all (except the smallest) zeros of the function $J_0(x)$ correctly to at least five digits. + +FIGURE 5.2 + +The first ten zeros of the Bessel function $J_0(x)$. Top panel: The values of the successive zeros (roots) of $J_0(x)$. Bottom panel: Deviation in the values of these zeros between their exact expressions and their approximate values as given in Eq. (5.21). 
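The same experiment can be reproduced without the MATLAB library. The hedged Python sketch below is our own (`j0`, `bisect`, and `mcmahon` are helper names we introduce); it brackets each zero of $J_0$ near the leading term $\frac{\pi}{4}(4k-1)$, refines it by bisection, and evaluates the series of Eq. (5.21):

```python
import math

def j0(x, steps=2000):
    """J_0(x) = (1/pi) * integral_0^pi cos(x*sin t) dt, by Simpson's rule
    (steps must be even)."""
    h = math.pi / steps
    f = lambda t: math.cos(x * math.sin(t))
    total = f(0.0) + f(math.pi)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / (3 * math.pi)

def bisect(f, a, b, tol=1e-10):
    """Refine a bracketed sign change of f to within tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def mcmahon(k):
    """Approximate k-th zero of J_0 from the series of Eq. (5.21)."""
    p = 4 * k - 1
    return ((math.pi / 4) * p + 1 / (2 * math.pi * p)
            - 31 / (6 * math.pi**3 * p**3)
            + 3779 / (15 * math.pi**5 * p**5))

# The leading term (pi/4)*(4k-1) brackets each zero to within 0.5.
exact = [bisect(j0, (math.pi / 4) * (4 * k - 1) - 0.5,
                (math.pi / 4) * (4 * k - 1) + 0.5) for k in range(1, 6)]
approx = [mcmahon(k) for k in range(1, 6)]
```

As in Figure 5.2, the approximate values agree with the bisected zeros to better than $10^{-4}$ for every zero except the first.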
\ No newline at end of file diff --git a/samples/texts/348597/page_146.md b/samples/texts/348597/page_146.md new file mode 100644 index 0000000000000000000000000000000000000000..476092da9c788aee7b4973edbb3c300b37faecc2 --- /dev/null +++ b/samples/texts/348597/page_146.md @@ -0,0 +1,39 @@ +**In-Class Exercises** + +In each of the following problems, find the zeros of the following functions over the interval [0, 5]. + +Pb. 5.6 $f(x) = x^2 + 1$. (Alert: Applying `fsolve` blindly could lead you into trouble!) + +Pb. 5.7 $f(x) = \sin^2(x) - 1/2$. Compare your answer with the analytical result. + +Pb. 5.8 $f(x) = 2 \sin^2(x) - x^2$ + +Pb. 5.9 $f(x) = x - \tan(x)$ + +**Zeros of a Function in Two Variables** + +As previously noted, the power of the MATLAB `fsolve` function really shines in evaluating the roots of multivariable functions. + +**Example 5.3** + +Find the intersection in the *x*-*y* plane of the parabola and the plane given in Pb. 5.1. + +*Solution:* We follow these steps: + +1. Use the **contour** command to estimate the coordinates of the points of intersection of the surfaces in the *x*-*y* plane. + +2. Construct the *function M-file* for two functions ($z_1, z_2$) having two inputs (*x*, *y*): + +```matlab +function farray=funname(array) +x=array(1); +y=array(2); +farray(1)=7-sqrt(25+x.^2+y.^2); +farray(2)=4-2*x-4*y; +``` + +3. Use the approximate value found in step 1 as the value for the guess array; for example: + +`xyguess=[4 -1];` + +4. Finally, use the `fsolve` command to accurately find the root. The syntax is: \ No newline at end of file diff --git a/samples/texts/348597/page_147.md b/samples/texts/348597/page_147.md new file mode 100644 index 0000000000000000000000000000000000000000..7d2fc5eb6b0db6f0a45c8febbb79f13f67c50b6a --- /dev/null +++ b/samples/texts/348597/page_147.md @@ -0,0 +1,56 @@ +``` +xyroots=fsolve('funname',xyguess) +xyroots = + 4.7081 -1.3541 +``` + +5. 
To find the second root, repeat the `fsolve` call with the second value of `xyguess`, the estimate of the other root obtained from an examination of the contour plot in step 1:

```
xyguess=[-4 2];
xyroots=fsolve('funname',xyguess)
xyroots =
-3.9081 2.9541
```

This method can be extended to any number of variables and nonlinear equations, but the estimate of the roots becomes much more difficult and we will not go into further details here.

## In-Class Exercises

Find the values of x and y that simultaneously satisfy each pair of the following equations:

Pb. 5.10
$$
\begin{cases}
z_1 = 0 = x^3 + 2y - 3 \\
z_2 = 0 = x^2 + 3y^2 - 4
\end{cases}
$$

Pb. 5.11
$$
\begin{cases}
z_1 = 0 = \sin^3(x^2 + y) + y^2 - x - 27/4 \\
z_2 = 0 = x^2 + 3y^2 - 31
\end{cases}
$$

Pb. 5.12
$$
\begin{cases}
z_1 = 0 = x^{3/2} + (y-3)^2 - 12 \\
z_2 = 0 = x + y - 9
\end{cases}
$$

Pb. 5.13
$$
\begin{cases}
z_1 = 0 = \tan(x) - \sqrt{y} \\
z_2 = 0 = \sin^2(x) - \frac{y}{4} - \frac{1}{4}
\end{cases}
$$

## 5.2 Roots of a Polynomial

While the analytical expressions for the roots of quadratic, cubic, and quartic equations are known, in general, the roots of higher-order polynomials can- \ No newline at end of file diff --git a/samples/texts/348597/page_148.md new file mode 100644 index 0000000000000000000000000000000000000000..a12358004420d047c81760fed45195b95463d1c8 --- /dev/null +++ b/samples/texts/348597/page_148.md @@ -0,0 +1,38 @@
not be found analytically. MATLAB has a built-in command that finds all the roots (real and complex) of any polynomial equation. As previously noted, the MATLAB command for finding the polynomial roots is **roots**:

`r=roots(p)`

In interpreting the results from this command, recall the Fundamental Theorem of Algebra, which states the root properties of a polynomial of degree *n* with real coefficients:

1. 
The polynomial admits $n$ complex roots, counted with multiplicity.

2. Complex roots come in conjugate pairs. [If you are not familiar with complex numbers and with the term complex conjugate (the latter term should pique your curiosity), be a little patient. Help is on the way; Chapter 6 covers the topic of complex numbers.]

Conversely, knowing the roots, we can reassemble the polynomial. The command is **poly**.

`poly(r)`

**In-Class Exercises**

**Pb. 5.14** Find the roots of the polynomial $p = [1 \ 3 \ 2 \ 1 \ 0 \ 3]$, and compute their sum and product.

**Pb. 5.15** Consider the two polynomials:

$p_1 = [1 \ 3 \ 2 \ 1 \ 0 \ 3]$ and $p_2 = [3 \ 2 \ 1]$

Find the value(s) of $x$ at which the curves representing these polynomials would intersect.

**Pb. 5.16** Find the constants $A, B, C, D$ and $a, b, c, d$ that permit the following expansion in partial fractions:

$$
\frac{1}{x^4 - 25x^2 + 144} = \frac{A}{(x-a)} + \frac{B}{(x-b)} + \frac{C}{(x-c)} + \frac{D}{(x-d)}
$$

## 5.3 Optimization Methods

Many design problems call for the maximization or minimization (optimization) of a particular function over a particular domain. (Recall the \ No newline at end of file diff --git a/samples/texts/348597/page_149.md new file mode 100644 index 0000000000000000000000000000000000000000..19c7d8e4f24dea93371306703430b8ea2cba5bc6 --- /dev/null +++ b/samples/texts/348597/page_149.md @@ -0,0 +1,27 @@
resistor circuit [Figure 3.1] in which we wanted to find the maximum power delivered to a load resistor.) In this section, we will learn the simple Golden Section rule and the use of the `fmin` command to solve the simplest forms of this problem. The important class of problems related to optimizing a function, while satisfying a number of constraints, will be left to more advanced courses.
Let us start by reminding ourselves of the definitions of some terms: the *domain* is the set of elements to which a function assigns values; the *range* is the set of values thus obtained.

**DEFINITION** Let $I$, the domain of the function $f(x)$, contain the point $c$. We say that:

1. $f(c)$ is the maximum value of the function on $I$ if $f(c) \geq f(x)$ for all $x \in I$.

2. $f(c)$ is the minimum value of the function on $I$ if $f(c) \leq f(x)$ for all $x \in I$.

3. An extremum is the common designation for either the maximum value or the minimum value.

Using the above definitions, we note that the maximum (minimum) may appear at an endpoint of the interval $I$, or possibly in the interior of the interval:

* If a maximum (minimum) appears at an endpoint, we describe this extreme point as an endpoint maximum (minimum).

* If a maximum (minimum) appears in the interior of the interval, we describe this extreme point as a local maximum (minimum).

* The largest (smallest) value among the maximum (minimum) values (either endpoint or local) is called the global maximum (minimum) and is the object of our search.

We note, in passing, the equivalence of finding the local extremum of a function with finding the zeros of the derivative of this function. The following methods are useful when this direct approach is impractical due to a number of practical complications.

As with finding the zeros of a function, in this instance we will also explore the graphical method, the simple numerical method, and the MATLAB built-in commands for finding the extremum.
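The equivalence just noted can be demonstrated directly: an interior extremum of $f$ is a zero of $f'$, so the Newton-Raphson update of Eq. (5.13), applied to $f'$ (with $f''$ as its derivative), locates it. The Python sketch below is our own illustration on $f(x) = xe^{-x}$, which is not an example from the text:

```python
import math

# Illustration (ours): f(x) = x*exp(-x) has a single local maximum where
# f'(x) = (1 - x)*exp(-x) = 0, i.e., at x = 1.
fprime  = lambda x: (1.0 - x) * math.exp(-x)   # f'(x)
fsecond = lambda x: (x - 2.0) * math.exp(-x)   # f''(x)

def newton(g, gprime, x, tol=1e-12, max_iter=100):
    """Newton-Raphson iteration x <- x - g(x)/g'(x), as in Eq. (5.13)."""
    for _ in range(max_iter):
        step = g(x) / gprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

# The extremum of f is the zero of f'.
xmax = newton(fprime, fsecond, 0.5)
```

Running the same two-line recipe on any smooth $f$ with known derivatives reproduces the "derivative-zero" shortcut that the graphical and Golden Section methods below deliberately avoid.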
+ +### 5.3.1 Graphical Method + +In the graphical method, in steps very similar to those described in Section 5.1.1 for finding the zeros of a single variable function, we follow these steps: \ No newline at end of file diff --git a/samples/texts/348597/page_15.md b/samples/texts/348597/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..6a4bf1cb94aca5ce14802f7cafecd533fede1303 --- /dev/null +++ b/samples/texts/348597/page_15.md @@ -0,0 +1,18 @@ +# 1 + +*Introduction to MATLAB® and +Its Graphics Capabilities* + +## 1.1 Getting Started + +MATLAB can be thought of as a library of programs that will prove very useful in solving many electrical engineering computational problems. MATLAB is an ideal tool for numerically assisting you in obtaining answers, which is a major goal of engineering analysis and design. This program is very useful in circuit analysis, device design, signal processing, filter design, control system analysis, antenna design, microwave engineering, photonics engineering, computer engineering, and all other sub-fields of electrical engineering. It is also a powerful graphic and visualization tool. + +The first step in using MATLAB is to know how to call it. It is important to remember that although the front-end and the interfacing for machines with different operating systems are sometimes different, once you are inside MATLAB, all programs and routines are written in the same manner. Only those few commands that are for file management and for interfacing with external devices such as printers may be different for different operating systems. + +After entering MATLAB, you should see the prompt `>>`, which means the program interpreter is waiting for you to enter instructions. (Remember to press the Return key at the end of each line that you enter.) + +Now type `clf`. This command creates a graph window (if one does not already exist) or clears an existing graph window. 
+ +Because it is impossible to explain the function of every MATLAB command within this text, how would you get information on a certain command syntax? The MATLAB program has extensive help documentation available with simple commands. For example, if you wanted help on a function called `roots` (we will use this function often), you would type `help roots`. + +Note that the help facility cross-references other functions that may have related uses. This requires that you know the function name. If you want an idea of the available help files in MATLAB, type `help`. This gives you a list of topics included in MATLAB. To get help on a particular topic such as the Optimization Toolbox, type `help toolbox/optim`. This gives you a list of \ No newline at end of file diff --git a/samples/texts/348597/page_150.md b/samples/texts/348597/page_150.md new file mode 100644 index 0000000000000000000000000000000000000000..5ed94957e32d2889cfd7714fb47c2f1d08a5c850 --- /dev/null +++ b/samples/texts/348597/page_150.md @@ -0,0 +1,25 @@ +1. Plot the particular function over the defined domain. + +2. Examine the plot to determine whether the extremum is an end-point extremum or a local extremum. + +3. Zoom in on the neighborhood of the so-identified extremum by repeated application of the MATLAB `axis` or `zoom` commands. + +4. Use the cross hair of the `ginput` command to read the coordinates of the extremum. [Be especially careful here. Extra caution is prompted by the fact that the curve is flat (its tangent is parallel to the x-axis) at a local extremum; thus, you may need to re-plot the curve in the neighborhood of this extremum to find, through visual means, accurate results for the coordinates of the extremum. There may be too few points in the original plot for the zooming technique to provide more than a very rough approximation.] 
+ +## In-Class Exercises + +Find, graphically, for each of the following exercises, the coordinates of the global maximum and the global minimum for the following curves in the indicated intervals. Specify the nature of the extremum. + +Pb. 5.17 $y = f(x) = \exp(-x^2)$ on $-4 < x < 4$ + +Pb. 5.18 $y = f(x) = \exp(-x^2) \sin^2(x)$ on $-4 < x < 4$ + +Pb. 5.19 $y = f(x) = \exp(-x^2) [x^3 + 2x + 3]$ on $-4 < x < 4$ + +Pb. 5.20 $y = f(x) = 2 \sin(x) - x$ on $0 < x < 2\pi$ + +Pb. 5.21 $y = f(x) = \sqrt{1 + \sin(x)}$ on $0 < x < 2\pi$ + +### 5.3.2 Numerical Methods + +We discuss now the Golden Section method for evaluating the position of the local minimum of a function and its value at this minimum. We assume that we have plotted the function and have established that such a local minimum exists. Our goal at this point is to accurately pinpoint the position and value of this minimum. We detail the derivation of an elementary technique for this search: the Golden Section method. More accurate and efficient techniques for this task have been developed. These are incorporated in the built-in command `fmin`; the mode of use is discussed in Section 5.3.3. \ No newline at end of file diff --git a/samples/texts/348597/page_151.md b/samples/texts/348597/page_151.md new file mode 100644 index 0000000000000000000000000000000000000000..8ad8b3bb6af31329a23465141a9737089a4940fa --- /dev/null +++ b/samples/texts/348597/page_151.md @@ -0,0 +1,27 @@ +### 5.3.2.1 Golden Section Method + +Assume that, by examining the graph of the function under consideration, you have established the local minimum $x_{\min} \in [a, b]$. This means that the curve of the function is strictly decreasing in the interval $[a, x_{\min}]$ and is strictly increasing in the interval $[x_{\min}, b]$. 
Next, choose a number $r < 1/2$, but whose precise value will be determined later, and define the internal points c and d such that: + +$$c = a + r(b - a) \tag{5.22}$$ + +$$d = a + (1-r)(b-a) \tag{5.23}$$ + +and such that $a < c < d < b$. Next, evaluate the values of the function at c and d. If we find that $f(c) \ge f(d)$, we can assert that $x_{\min} \in [c, b]$; that is, we narrowed the external bounds of the interval. (If the inequality was in the other sense, we could have instead narrowed the outer limit from the right.) If in the second iteration, we fix the new internal points such that the new value of c is the old value of d, then all we have to compute at this step is the new value of d. If we repeat the same iteration k-times, until the separation between c and d is smaller than the desired tolerance, then at that point we can assert that: + +$$x_{\min} = \frac{c(k) + d(k)}{2} \tag{5.24}$$ + +Now, let us determine the value of r that will allow the above iteration to proceed as described. Translating the above statements into equations, we desire that: + +$$c(2) = d(1) \tag{5.25}$$ + +$$\Rightarrow c(2) = a(2) + r(b(2) - a(2)) = a(1) + (1-r)(b(1) - a(1))$$ + +$$a(2) = c(1) = a(1) + r(b(1) - a(1)) \tag{5.26}$$ + +$$b(2) = b(1) \tag{5.27}$$ + +Now, replacing the values of $a(2)$ and $b(2)$ from Eqs. (5.26) and (5.27) into Eq. (5.25), we are led to a second-degree equation in r: + +$$r^2 - 3r + 1 = 0 \tag{5.28}$$ + +The desired root is the value of the Golden ratio: \ No newline at end of file diff --git a/samples/texts/348597/page_152.md b/samples/texts/348597/page_152.md new file mode 100644 index 0000000000000000000000000000000000000000..2b1893467d59644c22688838274aa7da63a0741f --- /dev/null +++ b/samples/texts/348597/page_152.md @@ -0,0 +1,38 @@ +$$r = \frac{3 - \sqrt{5}}{2} \qquad (5.29)$$ + +and hence, the name of the method. 
The following *function M-file* implements the above algorithm:

```matlab
function [xmin,ymin]=goldensection(funname,a,b,tolerance)
r=(3-sqrt(5))/2;
c=a+r*(b-a);
fc=feval(funname,c);
d=a+(1-r)*(b-a);
fd=feval(funname,d);

while d-c>tolerance
    if fc>fd
        dnew=c+(1-r)*(b-c);
        a=c;
        c=d;
        fc=fd;
        d=dnew;
        fd=feval(funname,dnew);
    else
        cnew=a+r*(d-a);
        b=d;
        d=c;
        fd=fc;
        c=cnew;
        fc=feval(funname,cnew);
    end
end

xmin=(c+d)/2;
ymin=feval(funname,xmin);
```

For example, if we wanted to find the position of the minimum of the cosine function and its value in the interval $3 < x < 3.5$, accurate to $10^{-4}$, we would enter in the command window, after having saved the above *function M-file*, the following command: \ No newline at end of file diff --git a/samples/texts/348597/page_153.md new file mode 100644 index 0000000000000000000000000000000000000000..256f540cc37099b7f3bc4b4345aa14eab6b85e8e --- /dev/null +++ b/samples/texts/348597/page_153.md @@ -0,0 +1,25 @@

[xmin, ymin] = goldensection('cos', 3, 3.5, 10^(-4))

### 5.3.3 MATLAB `fmin` and `fmins` Built-in Functions

Following methodically the same steps as those used with `fzero` to find the zeros of a function, we can use the `fmin` command to find the minimum of a function of one variable on a given interval. The recommended sequence of steps is as follows:

1. Edit a *function M-file* for the function under consideration.

2. Plot the curve of the function over the desired domain, to get an overview of the function's shape and an estimate of the position of the minimum.

3. Use the command `fmin` to accurately find the minimum. The syntax is as follows:

`xmin=fmin('funname',a,b) % [a,b] is the interval`

The local maximum of a function $f(x)$ on an interval can be computed by noting that this quantity can be deduced from knowing the values of the coordinates of the local minimum of $-f(x)$.
The implementation of this task consists of creating a file for the negative of this function (call it **n-funname**) and entering the following commands in the command window: + +`xmax=fmin('n-funname',xi,xf)` + +`fmax=-1*feval('n-funname', xmax)` + +#### *Homework Problems* + +**Pb. 5.22** We have two posts of height 6 m and 8 m, and separated by a distance of 21 m. A line is to run from the top of one post to the ground between the posts and then to the top of the other post (Figure 5.3). Find the configuration that minimizes the length of the line. + +**Pb. 5.23** Fermat's principle states that light going from Point A to Point B selects the path which requires the least amount of travel time. Consider the situation in which an engineer in a submarine wants to communicate, using a laser-like pointer, with a detector at the top of the mast of another boat. At what angle $\theta$ to the vertical should he point his beam? Assume that the detector is 50 ft above the water surface, the submarine transmitter is 30 ft under the surface, the horizontal distance separating the boat from the submarine is 100 ft, and the velocity of light in water is 3/4 of its velocity in air (Figure 5.4). \ No newline at end of file diff --git a/samples/texts/348597/page_154.md b/samples/texts/348597/page_154.md new file mode 100644 index 0000000000000000000000000000000000000000..2bc07fdb196168b15da88114ccd27305a07191cd --- /dev/null +++ b/samples/texts/348597/page_154.md @@ -0,0 +1,5 @@ +FIGURE 5.3 +Schematics for Pb. 5.22. (ACB is the line whose length we want to minimize.) + +FIGURE 5.4 +Schematics for Pb. 5.23. A is the location of the detector at the top of the mast, B is the location of the emitter in the submarine, and BOA is the optical path of the ray of light. 
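The Golden Section algorithm of Section 5.3.2.1 and the minus-sign trick of Section 5.3.3 can also be exercised outside MATLAB. The Python sketch below is purely illustrative (the port of `goldensection` and the choice of test functions and intervals are our own); it reproduces the book's worked example, the minimum of the cosine on $[3, 3.5]$, and then locates a maximum by minimizing $-f$:

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden Section search for the local minimum of f on [a, b]."""
    r = (3 - math.sqrt(5)) / 2
    c, d = a + r * (b - a), a + (1 - r) * (b - a)
    fc, fd = f(c), f(d)
    while d - c > tol:
        if fc > fd:                       # minimum lies in [c, b]
            a, c, fc = c, d, fd           # old d is recycled as the new c
            d = a + (1 - r) * (b - a)
            fd = f(d)
        else:                             # minimum lies in [a, d]
            b, d, fd = d, c, fc           # old c is recycled as the new d
            c = a + r * (b - a)
            fc = f(c)
    xmin = (c + d) / 2
    return xmin, f(xmin)

# minimum of cos on [3, 3.5] lies at x = pi, where cos(pi) = -1
xmin, ymin = golden_section(math.cos, 3, 3.5, 1e-4)

# maximum of sin on [1, 2] via the minimum of -sin: peak at pi/2
xmax, _ = golden_section(lambda x: -math.sin(x), 1, 2)
fmax = math.sin(xmax)
```

The loop mirrors the M-file exactly: whichever interior point survives the narrowing step is reused, so each iteration costs a single new function evaluation.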
\ No newline at end of file diff --git a/samples/texts/348597/page_155.md new file mode 100644 index 0000000000000000000000000000000000000000..bba45c3d3ce5100f9934d29608c55795059265e8 --- /dev/null +++ b/samples/texts/348597/page_155.md @@ -0,0 +1,42 @@

**Minimum of a Function of Two Variables**

To find the local minimum of a multivariable function, we use the MATLAB `fmins` function. Finding the maximum can be handled by the same technique as outlined for the one-variable case.

**Example 5.4**

Find the position of the minimum of the surface $f(x, y) = x^2 + y^2$.

*Solution:*

1. First, make a function file and save it as `fname.m`.

```matlab
function f=fname(array)

x=array(1); % x is stored in first element of array
y=array(2); % y is stored in second element of array
f=x.^2+y.^2; % function stored in f
```

2. Graph the contour plot of the surface and, from it, estimate the coordinates of the minimum:

```matlab
arrayguess=[.1 .1];
```

The arrayguess holds the initial guess for both coordinates at the minimum. That is,

```matlab
arrayguess=[xguess yguess];
```

3. The coordinates of the minimum are then obtained by entering the following commands in the command window:

```matlab
arraymin=fmins('fname',arrayguess)
fmin=feval('fname',arraymin)
```

**Homework Problem**

**Pb. 5.24** In this problem we propose to apply the above optimization techniques to the important problem of the optical narrow band transmission filter. This filter, in very wide use in optics, consists of two parallel semi-reflective surfaces (i.e., mirrors) with reflection coatings $R_1$ and $R_2$ and separated by a distance $L$.
Assuming that the material between the mirrors has an index of refraction $n$ and that the incoming beam of light has frequency $\omega$ and is making an angle $\theta_i$ with the normal to the semi-reflective surfaces, then the ratio of the transmitted light intensity to the incident intensity is \ No newline at end of file diff --git a/samples/texts/348597/page_156.md new file mode 100644 index 0000000000000000000000000000000000000000..0b699374bc32ca518b6d38ec28436a220fd6112d --- /dev/null +++ b/samples/texts/348597/page_156.md @@ -0,0 +1,29 @@

$$T = \frac{I_{\text{transm.}}}{I_{\text{incid.}}} = \frac{(1-R_1)(1-R_2)}{(1-\sqrt{R_1 R_2})^2 + 4\sqrt{R_1 R_2} \sin^2\left(\pi \frac{\omega}{\omega_0}\right)}$$

where $\omega_0 = \frac{\pi c}{nL \cos(\theta_t)}$, $\sin(\theta_i) = n \sin(\theta_t)$ (Snell's law), and $\theta_t$ is the angle that the transmitted light makes with the normal to the mirror surfaces.

In the following activities, we want to understand how the above transmission filter responds as a function of the specified parameters. Choose the following parameters:

$$R_1 = R_2 = 0.8$$

$$0 \le \omega \le 4\omega_0$$

a. Plot $T$ vs. $\omega/\omega_0$ for the above frequency range.

b. At what frequencies does the transmission reach a maximum? A minimum?

c. Devise two methods by which you can tune the filter so that the maximum of the filter transmission is centered around a particular physical frequency.

d. How sharp is the filter? By sharp, we mean: what is the width of the transmission band that allows through at least 50% of the incident light? Define the width relative to $\omega_0$.

e. Answer question (d) with the values of the reflection coatings given now by:

$$R_1 = R_2 = 0.9$$

$$0 \le \omega \le 4\omega_0$$

Does the sharpness of the filter increase or decrease with an increase of the reflection coefficients of the coating surfaces for the two mirrors?

f.
Choosing $\omega = \omega_0$, plot a 3-D mesh of $T$ as a function of the reflection coefficients $R_1$ and $R_2$. Show, both graphically and numerically, that the best performance occurs when the reflection coatings are the same.

g. Plot the contrast function defined as $C = \frac{T_{\min}}{T_{\max}}$ as a function of the reflection coefficients $R_1$ and $R_2$. How should you choose your mirrors for maximum contrast? \ No newline at end of file diff --git a/samples/texts/348597/page_157.md new file mode 100644 index 0000000000000000000000000000000000000000..96a776060c527e123999d2e4308ba2de63f260a2 --- /dev/null +++ b/samples/texts/348597/page_157.md @@ -0,0 +1,21 @@

h. For $\omega = \omega_0$, plot the variation of the transmission coefficient as a function of $\theta_i$.

i. Repeat (h), but now investigate the variation in the transmission coefficient as a function of $L$.

## 5.4 MATLAB Commands Review

**besselj** The built-in Bessel function of the first kind.

**fmin** Finds the minimum value of a single-variable function on a restricted domain.

**fmins** Finds the local minimum of a multivariable function.

**fsolve** Finds a root to a system of nonlinear equations assuming an initial guess.

**fzero** Finds the zero of a single-variable function assuming an initial guess.

**roots** Finds the roots of a polynomial if the polynomial coefficients are given.

**poly** Assembles a polynomial from its roots.

**zoom** Zooms in and out on a 2-D plot. \ No newline at end of file diff --git a/samples/texts/348597/page_158.md new file mode 100644 index 0000000000000000000000000000000000000000..b6fc90a1b1caeb3203ba28f20346754f571616f4 --- /dev/null +++ b/samples/texts/348597/page_158.md @@ -0,0 +1,21 @@

# 6

## Complex Numbers

### 6.1 Introduction

Since $x^2 \ge 0$ for all real numbers $x$, the equation $x^2 = -1$ admits no real number as a solution.
To deal with this problem, mathematicians in the 18th century introduced the imaginary number $i = \sqrt{-1} = j$. (So as not to confuse the usual symbol for a current with this quantity, electrical engineers prefer the use of the $j$ symbol. MATLAB accepts either symbol, but always gives the answer with the symbol $i$.)

Expressions of the form:

$$z = a + jb \tag{6.1}$$

where *a* and *b* are real numbers, are called complex numbers. As illustrated in Section 6.2, this representation has properties similar to that of an ordered pair (*a*, *b*), which is represented by a point in the 2-D plane.

The real number *a* is called the real part of *z*, and the real number *b* is called the imaginary part of *z*. These numbers are referred to by the symbols *a* = Re(*z*) and *b* = Im(*z*).

When complex numbers are represented geometrically in the x-y coordinate system, the x-axis is called the real axis, the y-axis is called the imaginary axis, and the plane is called the complex plane.

### 6.2 The Basics

In this section, you will learn how, using MATLAB, you can represent a complex number in the complex plane, and how the addition (or subtraction) of two complex numbers, or the multiplication of a complex number by a real number or by *j*, can be interpreted geometrically. \ No newline at end of file diff --git a/samples/texts/348597/page_159.md new file mode 100644 index 0000000000000000000000000000000000000000..55b4756c597ad03141fefdc4a48f3a0ea5f7b709 --- /dev/null +++ b/samples/texts/348597/page_159.md @@ -0,0 +1,39 @@

**Example 6.1**

Plot, in the complex plane, the three points ($P_1, P_2, P_3$) representing the complex numbers: $z_1 = 1$, $z_2 = j$, $z_3 = -1$.
+ +**Solution:** Enter and execute the following commands in the command window: + +```matlab +z1=1; +z2=j; +z3=-1; +plot(z1,'*') +axis([-2 2 -2 2]) +axis('square') +hold on +plot(z2,'o') +plot(z3,'*') +hold off +``` + +that is, a complex number in the **plot** command is interpreted by MATLAB to mean: take the real part of the complex number to be the *x*-coordinate and the imaginary part of the complex number to be the *y*-coordinate. + +### 6.2.1 Addition + +Next, we define addition for complex numbers. The rule can be directly deduced from analogy of addition of two vectors in a plane: the *x*-component of the sum of two vectors is the sum of the *x*-components of each of the vectors, and similarly for the *y*-component. Therefore: + +If: +$$z_1 = a_1 + jb_1 \quad (6.2)$$ + +and +$$z_2 = a_2 + jb_2 \quad (6.3)$$ + +Then: +$$z_1 + z_2 = (a_1 + a_2) + j(b_1 + b_2) \quad (6.4)$$ + +The addition or subtraction rules for complex numbers are geometrically translated through the parallelogram rules for the addition and subtraction of vectors. + +**Example 6.2** + +Find the sum and difference of the complex numbers \ No newline at end of file diff --git a/samples/texts/348597/page_16.md b/samples/texts/348597/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..73d967ff4fc7edd752ba9624218cd20482be7064 --- /dev/null +++ b/samples/texts/348597/page_16.md @@ -0,0 +1,31 @@ +all relevant functions pertaining to that area. Now you may type **help** for any function listed. For example, try **help fmin**. + +## 1.2 Basic Algebraic Operations and Functions + +The MATLAB environment can be used, on the most elementary level, as a tool to perform simple algebraic manipulations and function evaluations. + +### Example 1.1 + +Exploring the calculator functions of MATLAB. The purpose of this example is to show how to manually enter data and how to use basic MATLAB algebraic operations. 
Note that the statements will be executed immediately after they are typed and entered (no equal sign is required).

Type and enter the text that follows the >> prompt to find out the MATLAB responses to the following:

2+2

5^2

2*sin(pi/4)

The last command gave the sine of π/4. Note that the argument of the function was enclosed in parentheses directly following the name of the function. Therefore, if you wanted to find sin³(π/4), the proper MATLAB syntax would be

sin(pi/4)^3

To facilitate its widespread use, MATLAB has all the standard elementary mathematical functions as built-in functions. Type **help elfun**, which is indexed in the main help menu, to get a listing of some of these functions. Remember that this is just a small sampling of the available functions.

help elfun

The response to the last command will give you a large list of these elementary functions, some of which may be new to you, but all of which will be used in your future engineering studies, and explored in later chapters of this book.

### Example 1.2

Assigning and calling values of parameters. In addition to inputting data directly to the screen, you can assign a symbolic constant or constants to rep- \ No newline at end of file diff --git a/samples/texts/348597/page_160.md new file mode 100644 index 0000000000000000000000000000000000000000..a1999ff7a2bcf6835d69b31611232824ebe3a763 --- /dev/null +++ b/samples/texts/348597/page_160.md @@ -0,0 +1,35 @@

$$z_1 = 1 + 2j \quad \text{and} \quad z_2 = 2 + j$$

**Solution:** Grouping the real and imaginary parts separately, we obtain:

$$z_1 + z_2 = 3 + 3j$$

and

$$z_1 - z_2 = -1 + j$$

**Preparatory Exercise**

**Pb. 6.1** Given the complex numbers $z_1, z_2$, and $z_3$ corresponding to the vertices $P_1, P_2$, and $P_3$ of a parallelogram, find $z_4$ corresponding to the fourth vertex $P_4$. (Assume that $P_4$ and $P_2$ are opposite vertices of the parallelogram).
Verify your answer graphically for the case:

$$z_1 = 2+j, \quad z_2 = 1+2j, \quad z_3 = 4+3j$$

### 6.2.2 Multiplication by a Real or Imaginary Number

If we multiply the complex number $z = a + jb$ by a real number $k$, the resultant complex number is given by:

$$k \times z = k \times (a + jb) = ka + jkb \qquad (6.5)$$

What happens when we multiply by $j$?

Let us, for a moment, return to Example 6.1. We note the following properties for the three points $P_1, P_2$, and $P_3$:

1. The three points are equally distant from the origin of the axes.

2. The point $P_2$ is obtained from the point $P_1$ by a $\pi/2$ counterclockwise rotation.

3. The point $P_3$ is obtained from the point $P_2$ through another $\pi/2$ counterclockwise rotation.

We also note, by examining the algebraic forms of $z_1, z_2, z_3$ that:

$$z_2 = jz_1 \quad \text{and} \quad z_3 = jz_2 = j^2z_1 = -z_1$$ \ No newline at end of file diff --git a/samples/texts/348597/page_161.md new file mode 100644 index 0000000000000000000000000000000000000000..5b9b4f71768cd276b0445ebb9d5fde1b5b7cda11 --- /dev/null +++ b/samples/texts/348597/page_161.md @@ -0,0 +1,48 @@

That is, multiplying by $j$ is geometrically equivalent to a counterclockwise rotation by an angle of $\pi/2$.

### 6.2.3 Multiplication of Two Complex Numbers

The multiplication of two complex numbers follows the same rules of algebra for real numbers, but considers $j^2 = -1$. If:

$$z_1 = a_1 + jb_1 \quad \text{and} \quad z_2 = a_2 + jb_2$$

this yields:

$$z_1z_2 = (a_1a_2 - b_1b_2) + j(a_1b_2 + b_1a_2) \quad (6.6)$$

#### Preparatory Exercises

Solve the following problems analytically.

**Pb.
6.2** Find $z_1z_2, z_1^2, z_2^2$ for the following pairs: + +$$\begin{align*} +\text{a.} \quad & z_1 = 3j; \quad z_2 = 1-j \\ +\text{b.} \quad & z_1 = 4+6j; \quad z_2 = 2-3j \\ +\text{c.} \quad & z_1 = \frac{1}{3}(2+4j); \quad z_2 = \frac{1}{2}(1-5j) \\ +\text{d.} \quad & z_1 = \frac{1}{3}(2-4j); \quad z_2 = \frac{1}{2}(1+5j) +\end{align*}$$ + +**Pb. 6.3** Find the real quantities $m$ and $n$ in each of the following equations: + +$$\begin{align*} +\text{a.} \quad & mj + n(1 + j) = 3 - 2j \\ +\text{b.} \quad & m(2 + 3j) + n(1 - 4j) = 7 + 5j +\end{align*}$$ + +*(Hint: Two complex numbers are equal if separately the real and imaginary parts are equal.)* + +**Pb. 6.4** Write the answers in standard form: (i.e., $a + jb$) + +$$\begin{align*} +\text{a.} \quad &(3 - 2j)^2 - (3 + 2j)^2 \\ +\text{b.} \quad &(7 + 14j)^7 \\ +\text{c.} \quad &\left[(2 + j)\left(\frac{1}{2} + 2j\right)\right]^2 \\ +\text{d.} \quad &j(1 + 7j) - 3j(4 + 2j) +\end{align*}$$ + +**Pb. 6.5** Show that for all complex numbers $z_1, z_2, z_3$, we have the following properties: + +$$z_1z_2 = z_2z_1 \quad (\text{commutativity property})$$ + +$$z_1(z_2 + z_3) = z_1z_2 + z_1z_3 \quad (\text{distributivity property})$$ \ No newline at end of file diff --git a/samples/texts/348597/page_162.md b/samples/texts/348597/page_162.md new file mode 100644 index 0000000000000000000000000000000000000000..65f2798e806f8936dc017564cb8a2ea3a9788f54 --- /dev/null +++ b/samples/texts/348597/page_162.md @@ -0,0 +1,12 @@ +FIGURE 6.1 +The center of mass of a triangle. (Refer to Pb. 6.6). + +**Pb. 6.6** Consider the triangle $\Delta(ABC)$, in which $D$ is the midpoint of the BC segment, and let the point $G$ be defined such that $(GD) = \frac{1}{3}(AD)$. Assuming that $z_A, z_B, z_C$ are the complex numbers representing the points $(A, B, C)$: + +a. Find the complex number $z_G$ that represents the point $G$. + +b. Show that $(CG) = \frac{2}{3}(CF)$ and that $F$ is the midpoint of the segment ($AB$). 
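The component formula (6.6) and the rotate-by-$j$ property of Section 6.2.2 can be checked against a language's built-in complex arithmetic. The Python sketch below is purely illustrative (the helper `mult` is our own name, and the test pairs are taken from Pb. 6.2):

```python
def mult(z1, z2):
    """Multiply two complex numbers via Eq. (6.6):
    (a1*a2 - b1*b2) + j*(a1*b2 + b1*a2)."""
    a1, b1 = z1.real, z1.imag
    a2, b2 = z2.real, z2.imag
    return complex(a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

# the first two pairs of Pb. 6.2: Eq. (6.6) agrees with Python's product
for z1, z2 in [(3j, 1 - 1j), (4 + 6j, 2 - 3j)]:
    assert mult(z1, z2) == z1 * z2

# multiplying by j rotates a point counterclockwise by pi/2 (Example 6.1)
z = 1 + 0j
assert 1j * z == 1j              # P1 -> P2
assert 1j * (1j * z) == -1       # P2 -> P3, i.e. j^2 z = -z
```

Python uses `j` for the imaginary unit in literals, matching the electrical-engineering convention adopted in this chapter.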
+ +## 6.3 Complex Conjugation and Division + +**DEFINITION** The complex conjugate of a complex number $z$, which is denoted by $\bar{z}$, is given by: \ No newline at end of file diff --git a/samples/texts/348597/page_163.md b/samples/texts/348597/page_163.md new file mode 100644 index 0000000000000000000000000000000000000000..24e521e0f1cc0a4bfb64bdacd4985c08744be5fe --- /dev/null +++ b/samples/texts/348597/page_163.md @@ -0,0 +1,48 @@ +$$ +\bar{z} = a - jb \quad \text{if} \quad z = a + jb \tag{6.7} +$$ + +That is, $\bar{z}$ is obtained from $z$ by reversing the sign of $\operatorname{Im}(z)$. Geometrically, $z$ +and $\bar{z}$ form a pair of symmetric points with respect to the real axis (x-axis) in +the complex plane. + +In MATLAB, complex conjugation is written as **conj(z)**. + +**DEFINITION** The modulus of a complex number $z = a + jb$, denoted by $|z|$, is +given by: + +$$ +|z| = \sqrt{a^2 + b^2} \tag{6.8} +$$ + +Geometrically, it represents the distance between the origin and the point +representing the complex number $z$ in the complex plane, which by +Pythagorean theorem is given by the same quantity. + +In MATLAB, the modulus of z is denoted by abs(z). + +**THEOREM** + +For any complex number $z$, we have the result that: + +$$ +|z|^2 = \bar{z}z \tag{6.9} +$$ + +PROOF Using the above two definitions for the complex conjugate and the norm, we can write: + +$$ +\bar{z}z = (a-jb)(a+jb) = a^2+b^2 = |z|^2 +$$ + +**In-Class Exercise** + +Solve the problem analytically, and then use MATLAB to verify your answers. + +Pb. 6.7 Let $z = 3 + 4j$. Find $|z|$, $\bar{z}$, and $z\bar{z}$. Verify the above theorem. + +6.3.1 Division + +Using the above definitions and theorem, we now want to define the inverse +of a complex number with respect to the multiplication operation. We write +the results in standard form. 
\ No newline at end of file diff --git a/samples/texts/348597/page_164.md new file mode 100644 index 0000000000000000000000000000000000000000..349ae977da4f2edde466386287a3ac0d0f835b20 --- /dev/null +++ b/samples/texts/348597/page_164.md @@ -0,0 +1,37 @@

$$z^{-1} = \frac{1}{z} = \frac{1}{(a+jb)} \left( \frac{a-jb}{a-jb} \right) = \frac{a-jb}{a^2+b^2} = \frac{\bar{z}}{|z|^2} \quad (6.10)$$

from which we deduce that:

$$\operatorname{Re}\left(\frac{1}{z}\right) = \frac{\operatorname{Re}(z)}{[\operatorname{Re}(z)]^2 + [\operatorname{Im}(z)]^2} \qquad (6.11)$$

and

$$\operatorname{Im}\left(\frac{1}{z}\right) = \frac{-\operatorname{Im}(z)}{[\operatorname{Re}(z)]^2 + [\operatorname{Im}(z)]^2} \qquad (6.12)$$

To summarize the above results, and to help you build your syntax for the quantities defined in this section, edit the following *script M-file* and execute it:

```matlab
z=3+4*j
zbar=conj(z)
modulz=abs(z)
modul2z=z*conj(z)
invz=1/z
reinvz=real(1/z)
iminvz=imag(1/z)
```

## In-Class Exercises

**Pb. 6.8** Analytically and numerically, obtain in the standard form an expression for each of the following quantities:

$$\text{a. } \frac{3+4j}{2+5j} \quad \text{b. } \frac{\sqrt{3}+j}{(1-j)(3+j)} \quad \text{c. } \left[ \frac{1-2j}{2+3j} - \frac{3+j}{2j} \right]$$

**Pb.
6.9** For any pair of complex numbers $z_1$ and $z_2$, show that:

$$\overline{z_1 + z_2} = \bar{z}_1 + \bar{z}_2$$

$$\overline{z_1 - z_2} = \bar{z}_1 - \bar{z}_2$$

$$\overline{z_1 z_2} = \bar{z}_1 \bar{z}_2$$

$$\overline{(z_1 / z_2)} = \bar{z}_1 / \bar{z}_2$$

$$\bar{\bar{z}} = z$$ \ No newline at end of file diff --git a/samples/texts/348597/page_165.md new file mode 100644 index 0000000000000000000000000000000000000000..b1443197635362a9aa7c162179084a351824585e --- /dev/null +++ b/samples/texts/348597/page_165.md @@ -0,0 +1,31 @@

## 6.4 Polar Form of Complex Numbers

If we use polar coordinates, we can write the real and imaginary parts of a complex number $z = a + jb$ in terms of the modulus of $z$ and the polar angle $\theta$:

$$a = r \cos(\theta) = |z| \cos(\theta) \qquad (6.13)$$

$$b = r \sin(\theta) = |z| \sin(\theta) \qquad (6.14)$$

and the complex number $z$ can then be written in polar form as:

$$z = |z| \cos(\theta) + j|z| \sin(\theta) = |z|(\cos(\theta) + j \sin(\theta)) \qquad (6.15)$$

The angle $\theta$ is called the argument of $z$ and is usually evaluated in the interval $-\pi \le \theta \le \pi$. However, we still have the same complex number if we add to the value of $\theta$ an integer multiple of $2\pi$.

$$\theta = \arg(z)$$

$$\tan(\theta) = \frac{b}{a} \qquad (6.16)$$

From the above results, it is obvious that the argument of the complex conjugate of a complex number is equal to minus the argument of this complex number.

In MATLAB, the convention for $\arg(z)$ is `angle(z)`.

### In-Class Exercise

**Pb. 6.10** Find the modulus and argument for each of the following complex numbers:

$$z_1 = 1+2j; \quad z_2 = 2+j; \quad z_3 = 1-2j; \quad z_4 = -1+2j; \quad z_5 = -1-2j$$

Plot these points. Can you detect any geometrical pattern? Generalize.
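The conjugation identities of Pb. 6.9 and the polar-form relations (6.13)-(6.16) lend themselves to a quick numerical spot-check. The following Python sketch is illustrative only (here `abs` and `cmath.phase` play the roles of MATLAB's `abs` and `angle`, and the sample values are our own):

```python
import cmath
import math

z1, z2 = 1 + 2j, 2 + 1j

# conjugation distributes over sums and products (Pb. 6.9)
assert (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
assert (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()
assert z1.conjugate().conjugate() == z1      # conjugating twice restores z

# |z|^2 = z * conj(z), the theorem of Section 6.3
assert abs(abs(z1) ** 2 - (z1 * z1.conjugate()).real) < 1e-12

# polar form, Eq. (6.15): z = |z| (cos(theta) + j sin(theta))
r, theta = abs(z1), cmath.phase(z1)
assert abs(z1 - r * (math.cos(theta) + 1j * math.sin(theta))) < 1e-12

# arg(conj(z)) = -arg(z)
assert abs(cmath.phase(z1.conjugate()) + theta) < 1e-12
```

Repeating the check with the five points of Pb. 6.10 is a one-line loop and a good warm-up for the exercise itself.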
+ +The main advantage of writing complex numbers in polar form is that it makes the multiplication and division operations more transparent, and provides a simple geometric interpretation to these operations, as shown below. \ No newline at end of file diff --git a/samples/texts/348597/page_166.md b/samples/texts/348597/page_166.md new file mode 100644 index 0000000000000000000000000000000000000000..896e2732249b4400e4a3cdc5fc01a6221f57950f --- /dev/null +++ b/samples/texts/348597/page_166.md @@ -0,0 +1,35 @@ +### 6.4.1 New Insights into Multiplication and Division of Complex Numbers + +Consider the two complex numbers $z_1$ and $z_2$ written in polar form: + +$$z_1 = |z_1|( \cos(\theta_1) + j \sin(\theta_1)) \qquad (6.17)$$ + +$$z_2 = |z_2|( \cos(\theta_2) + j \sin(\theta_2)) \qquad (6.18)$$ + +Their product $z_1 z_2$ is given by: + +$$z_1 z_2 = |z_1||z_2| \left[ (\cos(\theta_1)\cos(\theta_2) - \sin(\theta_1)\sin(\theta_2)) + j(\sin(\theta_1)\cos(\theta_2) + \cos(\theta_1)\sin(\theta_2)) \right] \qquad (6.19)$$ + +But using the trigonometric identities for the sine and cosine of the sum of two angles: + +$$\cos(\theta_1 + \theta_2) = \cos(\theta_1)\cos(\theta_2) - \sin(\theta_1)\sin(\theta_2) \qquad (6.20)$$ + +$$\sin(\theta_1 + \theta_2) = \sin(\theta_1)\cos(\theta_2) + \cos(\theta_1)\sin(\theta_2) \qquad (6.21)$$ + +the product of two complex numbers can then be written in the simpler form: + +$$z_1 z_2 = |z_1||z_2| [\cos(\theta_1 + \theta_2) + j \sin(\theta_1 + \theta_2)] \qquad (6.22)$$ + +That is, when multiplying two complex numbers, the modulus of the product is the product of the moduli, while the argument is the sum of arguments: + +$$|z_1 z_2| = |z_1||z_2| \qquad (6.23)$$ + +$$\arg(z_1 z_2) = \arg(z_1) + \arg(z_2) \qquad (6.24)$$ + +The above result can be generalized to the product of *n* complex numbers and the result is: + +$$|z_1 z_2 \dots z_n| = |z_1||z_2|\dots|z_n| \qquad (6.25)$$ + +$$\arg(z_1 z_2 \dots z_n) = \arg(z_1) + \arg(z_2) + \dots + 
\arg(z_n) \qquad (6.26)$$ + +A particular form of this expression is the De Moivre theorem, which states that: \ No newline at end of file diff --git a/samples/texts/348597/page_167.md b/samples/texts/348597/page_167.md new file mode 100644 index 0000000000000000000000000000000000000000..47ed9298f57fa3b6411392ff5ebe6da04cc433f5 --- /dev/null +++ b/samples/texts/348597/page_167.md @@ -0,0 +1,35 @@ +$$ +(\cos(\theta) + j \sin(\theta))^n = \cos(n\theta) + j \sin(n\theta) \quad (6.27) +$$ + +The above results suggest that the polar form of a complex number may be +written as a function of an exponential function because of the additivity of +the arguments upon multiplication. We revisit this issue later. + +In-Class Exercises + +**Pb. 6.11** Show that $\frac{z_1}{z_2} = \frac{|z_1|}{|z_2|}[\cos(\theta_1 - \theta_2) + j\sin(\theta_1 - \theta_2)]$. + +**Pb. 6.12** Explain, using the above results, why multiplication of any com- +plex number by $j$ is equivalent to a rotation of the point representing this +number in the complex plane by $\pi/2$. + +**Pb. 6.13** By what angle must we rotate the point P(3, 4) to transform it to the point P'(4, 3)? + +**Pb. 6.14** The points $z_1 = 1 + 2j$ and $z_2 = 2 + j$ are adjacent vertices of a regular hexagon. Find the vertex $z_3$ that is also a vertex of the same hexagon and that is adjacent to $z_2$ ($z_3 \neq z_1$). + +**Pb. 6.15** Show that the points A, B, C representing the complex numbers $z_A$, $z_B$, $z_C$ in the complex plane lie on the same straight line if and only if: + +$$ +\frac{z_A - z_C}{z_B - z_C} \text{ is real.} +$$ + +**Pb. 6.16** Determine the coordinates of the *P'* point obtained from the point *P*(2, 4) through a reflection around the line $y = \frac{x}{2} + 2$. + +**Pb. 6.17** Consider two points A and B representing, in the complex plane, the complex numbers $z_1$ and $1/z_1$. Let P be any point on the circle of radius 1 and centered at the origin (the unit circle). 
Show that the ratio of the length of the line segments PA and PB is the same, regardless of the position of point P on the unit circle.

**Pb. 6.18** Find the polar form of each of the following quantities:

$$
\frac{(1+j)^{15}}{(1-j)^9}, \quad \sqrt{-(-1+j)(j+2)}, \quad (1+j+j^2+j^3)^{99}
$$ \ No newline at end of file diff --git a/samples/texts/348597/page_168.md new file mode 100644 index 0000000000000000000000000000000000000000..71d4a58439159905f26a8c8899045462b74bc2ff --- /dev/null +++ b/samples/texts/348597/page_168.md @@ -0,0 +1,39 @@

### 6.4.2 Roots of Complex Numbers

Given the value of the complex number $z$, we are interested here in finding the solutions of the equation:

$$v^n = z \qquad (6.28)$$

Let us write both the solutions and $z$ in polar forms,

$$v = \rho(\cos(\alpha) + j\sin(\alpha)) \qquad (6.29)$$

$$z = r(\cos(\theta) + j\sin(\theta)) \qquad (6.30)$$

From the De Moivre theorem, the expression for $v^n = z$ can be written as:

$$\rho^n (\cos(n\alpha) + j\sin(n\alpha)) = r(\cos(\theta) + j\sin(\theta)) \qquad (6.31)$$

Comparing the moduli of both sides, we deduce by inspection that:

$$\rho = \sqrt[n]{r} = r^{1/n} \qquad (6.32)$$

The treatment of the argument should be done with great care. Recalling that two angles have the same cosine and sine if they are equal or differ from each other by an integer multiple of $2\pi$, we can then deduce that:

$$n\alpha = \theta + 2k\pi, \quad k = 0, \pm 1, \pm 2, \pm 3, \dots \qquad (6.33)$$

Therefore, the general expression for the roots is:

$$z^{1/n} = r^{1/n} \left( \cos\left(\frac{\theta}{n} + \frac{2k\pi}{n}\right) + j \sin\left(\frac{\theta}{n} + \frac{2k\pi}{n}\right) \right) \qquad (6.34)$$

with $k = 0, 1, 2, \dots, (n-1)$

Note that the roots reproduce themselves outside the range: $k = 0, 1, 2, \dots, (n - 1)$.

## In-Class Exercises

**Pb.
6.19** Calculate the roots of the equation $z^5 - 32 = 0$, and plot them in the complex plane. \ No newline at end of file diff --git a/samples/texts/348597/page_169.md new file mode 100644 index 0000000000000000000000000000000000000000..d2beb60060c45028768b74caab6d4c54c135dad7 --- /dev/null +++ b/samples/texts/348597/page_169.md @@ -0,0 +1,33 @@

a. What geometric shape does the polygon with the solutions as vertices form?

b. What is the sum of these roots? (Derive your answer both algebraically and geometrically.)

### 6.4.3 The Function $y = e^{j\theta}$

As alluded to previously, the expression $\cos(\theta) + j \sin(\theta)$ behaves very much as if it were an exponential; because of the additivity of the arguments upon multiplication, we denote this quantity by:

$$e^{j\theta} = \cos(\theta) + j\sin(\theta) \qquad (6.35)$$

PROOF Compute the Taylor expansion for both sides of the above equation. The series expansion for $e^{j\theta}$ is obtained by evaluating Taylor's formula at $x = j\theta$, giving (see appendix):

$$e^{j\theta} = \sum_{n=0}^{\infty} \frac{1}{n!} (j\theta)^n \qquad (6.36)$$

When this series expansion for $e^{j\theta}$ is written in terms of its even part and odd part, we have the result:

$$e^{j\theta} = \sum_{m=0}^{\infty} \frac{1}{(2m)!} (j\theta)^{2m} + \sum_{m=0}^{\infty} \frac{1}{(2m+1)!} (j\theta)^{2m+1} \qquad (6.37)$$

However, since $j^2 = -1$, this last equation can also be written as:

$$e^{j\theta} = \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m)!} (\theta)^{2m} + j \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m+1)!} (\theta)^{2m+1} \qquad (6.38)$$

which, by inspection, can be verified to be the sum of the Taylor expansions for the cosine and sine functions.

In this notation, the product of two complex numbers $z_1$ and $z_2$ is: $r_1 r_2 e^{j(\theta_1+\theta_2)}$.
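Equation (6.35), the De Moivre theorem (6.27), and the truncated Taylor series (6.36) are easy to spot-check numerically; the Python sketch below is illustrative (the angle, power, and truncation order are our own choices):

```python
import cmath
import math

theta = 0.7        # an arbitrary angle, in radians
n = 5

# Eq. (6.35): e^{j theta} = cos(theta) + j sin(theta)
lhs = cmath.exp(1j * theta)
rhs = math.cos(theta) + 1j * math.sin(theta)
assert abs(lhs - rhs) < 1e-12

# De Moivre, Eq. (6.27): (cos + j sin)^n = cos(n theta) + j sin(n theta)
assert abs(rhs ** n - (math.cos(n * theta) + 1j * math.sin(n * theta))) < 1e-12

# the Taylor series of Eq. (6.36), truncated at 20 terms, matches e^{j theta}
series = sum((1j * theta) ** k / math.factorial(k) for k in range(20))
assert abs(series - lhs) < 1e-12
```

Twenty terms are far more than needed for $|\theta| < 1$, since the tail of the series is bounded by $|\theta|^{20}/20!$.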
It is then a simple matter to show that: + +If: + +$$z = r \exp(j\theta) \qquad (6.39)$$ + +Then: + +$$\bar{z} = r \exp(-j\theta) \qquad (6.40)$$ \ No newline at end of file diff --git a/samples/texts/348597/page_17.md b/samples/texts/348597/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..3cb3c525f10d1cef065c7b4bda7e2522473858a6 --- /dev/null +++ b/samples/texts/348597/page_17.md @@ -0,0 +1,52 @@ +resent data and perform manipulations on them. For example, enter and note +the answer to each of the following: + +a=2 + +b=3 + +c=a+b + +d=a*b + +e=a/b + +f=a^3/b^2 + +g=a+3*b^2 + +*Question:* From the above, can you deduce the order in which MATLAB performs the basic operations? + +**In-Class Exercise** + +Pb. 1.1 Using the above values of *a* and *b*, find the values of: + +$$ +\text{a. } h = \sin(a) \sin(b) +$$ + +$$ +\mathbf{b.} \quad i = a^{1/3} b^{3/7} +$$ + +$$ +c. j = \sin^{-1}(a/b) = \arcsin(a/b) +$$ + +1.3 Plotting Points + +In this chapter section, you will learn how to use some simple MATLAB +graphics commands to plot points. We use these graphics commands later in +the text for plotting functions and for visualizing their properties. To view all +the functions connected with 2-dimensional graphics, type: + +help plot + +All graphics functions connected with 3-dimensional graphics can be looked +up by typing + +help plot3 + +A point P in the x-y plane is specified by two coordinates. The x-coordinate +measures the horizontal distance of the point from the y-axis, while the +y-coordinate measures the vertical distance above the x-axis. 
These coordi- \ No newline at end of file diff --git a/samples/texts/348597/page_170.md b/samples/texts/348597/page_170.md new file mode 100644 index 0000000000000000000000000000000000000000..efb577ed844b84842c9ce8773152b8e3243fd052 --- /dev/null +++ b/samples/texts/348597/page_170.md @@ -0,0 +1,44 @@ +and + +$$ +z^{-1} = \frac{1}{r} \exp(-j\theta) \tag*{(6.41)} +$$ + +from which we can deduce Euler’s equations: + +$$ +\cos(\theta) = \frac{\exp(j\theta) + \exp(-j\theta)}{2} \tag{6.42} +$$ + +and + +$$ +\sin(\theta) = \frac{\exp(j\theta) - \exp(-j\theta)}{2j} \quad (6.43) +$$ + +**Example 6.3** + +Use MATLAB to generate the graph of the unit circle in the complex plane. + +*Solution:* Because all points on the unit circle are equidistant from the origin and their distance to the origin (their modulus) is equal to 1, we can generate the circle by plotting the *N*-roots of unity, taking a very large value for *N*. This can be implemented by executing the following *script M-file*. + +N=720; +z=exp(j*2*pi*[1:N]./N); +plot(z) +axis square + +In-Class Exercises + +**Pb. 6.20** Using the exponential form of the *n*-roots of unity, and the expression for the sum of a geometric series (given in the appendix), show that the sum of these roots is zero. + +**Pb. 6.21** Compute the following sums: + +a. $1 + \cos(x) + \cos(2x) + ... + \cos(nx)$ + +b. $\sin(x) + \sin(2x) + ... + \sin(nx)$ + +c. $\cos(\alpha) + \cos(\alpha + \beta) + ... + \cos(\alpha + n\beta)$ + +d. $\sin(\alpha) + \sin(\alpha + \beta) + ... + \sin(\alpha + n\beta)$ + +**Pb. 
6.22** Verify numerically that for $z = x + jy$: \ No newline at end of file diff --git a/samples/texts/348597/page_171.md b/samples/texts/348597/page_171.md new file mode 100644 index 0000000000000000000000000000000000000000..7aa64de53d318e9360c62459edc1cc96b2b61210 --- /dev/null +++ b/samples/texts/348597/page_171.md @@ -0,0 +1,47 @@ +$$ \lim_{n \to \infty} \left(1 + \frac{z}{n}\right)^n = \exp(x)(\cos(y) + j \sin(y)) $$ + +For what values of y is this quantity pure imaginary? + +## Homework Problems + +**Pb. 6.23** Plot the curves determined by the following parametric representations: + +a. $z = 1 - jt$ $0 \le t \le 2$ + +b. $z = t + jt^2$ $-\infty < t < \infty$ + +c. $z = 2(\cos(t) + j \sin(t))$ $\frac{\pi}{2} < t < \frac{3\pi}{2}$ + +d. $z = 3(t + j - j \exp(-jt))$ $0 < t < \infty$ + +**Pb. 6.24** Find the expression $y = f(x)$ and plot the families of curves defined by each of the corresponding equations: + +a. $\operatorname{Re}(\frac{1}{z}) = 2$ + +b. $\operatorname{Im}(\frac{1}{z}) = 2$ + +c. $\operatorname{Re}(z^2) = 4$ + +d. $\operatorname{Im}(z^2) = 4$ + +e. $\left|\frac{z-3}{z+3}\right| = 5$ + +f. $\arg\left(\frac{z-3}{z+3}\right) = \frac{\pi}{4}$ + +g. $|z^2 - 1| = 3$ + +h. $|z| = \operatorname{Im}(z) + 4$ + +**Pb. 6.25** Find the image of the line $\operatorname{Re}(z) = 1$ upon the transformation $z' = z^2 + z$. (First obtain the result analytically, and then verify it graphically.) + +**Pb. 6.26** Consider the following bilinear transformation: $z' = \frac{az+b}{cz+d}$ + +Show how with proper choices of the constants *a*, *b*, *c*, *d*, we can generate all transformations of planar geometry (i.e., scaling, rotation, translation, and inversion). + +**Pb. 
6.27** Plot the curves $C'$ generated by the points $P'$ that are the images of points on the circle centered at (3, 4) and of radius 5 under the transformation of the preceding problem, with the following parameters: + +Case 1: $a = \exp(j\pi/4)$, $b = 0$, $c = 0$, $d = 1$ + +Case 2: $a = 1$, $b = 3$, $c = 0$, $d = 1$ + +Case 3: $a = 0$, $b = 1$, $c = 1$, $d = 0$ \ No newline at end of file diff --git a/samples/texts/348597/page_172.md b/samples/texts/348597/page_172.md new file mode 100644 index 0000000000000000000000000000000000000000..330a7ee5b9969d5f45302f79996a9e1c75998d97 --- /dev/null +++ b/samples/texts/348597/page_172.md @@ -0,0 +1,25 @@ +## 6.5 Analytical Solutions of Constant Coefficients ODE + +Finding the solutions of an ODE with constant coefficients is conceptually very similar to solving the linear difference equation with constant coefficients. We repeat the exercise here for its pedagogical benefits and to bring out some of the finer technical details peculiar to the ODEs of particular interest for later discussions. + +The linear differential equation of interest is given by: + +$$a_n \frac{d^n y}{dt^n} + a_{n-1} \frac{d^{n-1} y}{dt^{n-1}} + \dots + a_1 \frac{dy}{dt} + a_0 y = u(t) \quad (6.44)$$ + +In this section, we find the solutions of this ODE for the cases that $u(t) = 0$ and $u(t) = A \cos(\omega t)$. + +The solutions for the first case are referred to as the homogeneous solutions. By substitution, it is a trivial matter to verify that if $y_1(t)$ and $y_2(t)$ are solutions, then $c_1y_1(t) + c_2y_2(t)$, where $c_1$ and $c_2$ are constants, is also a solution. This is, as previously mentioned, referred to as the superposition principle for linear systems. + +If $u(t) \neq 0$, the general solution of the ODE will be the sum of the corresponding homogeneous solution and the particular solution peculiar to the specific details of $u(t)$. 
Furthermore, by inspection, it is clear that if the source can be decomposed into many components, then the particular solution can be written as the sum of the particular solutions for the different components and with the same weights as in the source. This property characterizes a linear system. + +**DEFINITION** A system L is considered linear if: + +$$L(c_1u_1(t) + c_2u_2(t) + \dots + c_nu_n(t)) = c_1L(u_1(t)) + c_2L(u_2(t)) + \dots + c_nL(u_n(t)) \quad (6.45)$$ + +where the c's are constants and the u's are time-dependent source signals. + +### 6.5.1 Transient Solutions + +To obtain the homogeneous solutions, we set $u(t) = 0$. We guess that the solution to this homogeneous differential equation is $y = \exp(st)$. You may wonder why we made this guess; the secret is in the property of the exponential function, whose derivative is proportional to the function itself. That is: + +$$\frac{d(\exp(st))}{dt} = s \exp(st) \quad (6.46)$$ \ No newline at end of file diff --git a/samples/texts/348597/page_173.md b/samples/texts/348597/page_173.md new file mode 100644 index 0000000000000000000000000000000000000000..be95c9a405848ca36fecae23e3c6189e53a4a437 --- /dev/null +++ b/samples/texts/348597/page_173.md @@ -0,0 +1,51 @@ +Through this substitution, the above ODE reduces to an algebraic equation, +and the solution of this algebraic equation then reduces to finding the roots +of the polynomial: + +$$ +a_n s^n + a_{n-1} s^{n-1} + \dots + a_1 s + a_0 = 0 \tag{6.47} +$$ + +We learned in Chapter 5 the MATLAB command for finding these roots, +when needed. 
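As an illustrative stand-in for that MATLAB command (not the text's own code), the same root-finding step can be sketched in Python with NumPy's `roots`, which takes the coefficients in descending powers of $s$:

```python
import numpy as np

# Characteristic polynomial of  y'' + 3y' + 2y = 0  is  s^2 + 3s + 2 = 0
s = np.roots([1, 3, 2])                 # coefficients a_2, a_1, a_0
# The two distinct real roots are s = -2 and s = -1:
assert np.allclose(sorted(s.real), [-2.0, -1.0])
```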
Now, using the superposition principle, and assuming all the
roots are distinct, the general solution of the homogeneous differential equation is given by:

$$
y_{\text{homog.}} = c_1 \exp(s_1 t) + c_2 \exp(s_2 t) + \dots + c_n \exp(s_n t) \quad (6.48)
$$

where $s_1, s_2, ..., s_n$ are the above roots and $c_1, c_2, ..., c_n$ are constants determined
from the initial conditions of the solution and all its derivatives to order $n-1$.

**NOTE** In the case that two or more of the roots are equal, it is easy to verify that the solution of the homogeneous ODE includes, instead of a constant multiplied by the exponential term corresponding to that root, a polynomial multiplying the exponential function. The degree of this polynomial is $(m - 1)$ if $m$ is the degeneracy of the root in question.

**Example 6.4**

Find the transient solutions to the second-order differential equation:

$$
a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = 0 \tag{6.49}
$$

Solution: The characteristic polynomial associated with this ODE is the second-degree equation given by:

$$
as^2 + bs + c = 0 \tag{6.50}
$$

The roots of this equation are

$$
s_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
$$

The nature of the solutions is very dependent on the sign of the discriminant
($b^2 - 4ac$):

* If $b^2 - 4ac > 0$, the two roots are distinct and real. Call these roots $\alpha_1$ and $\alpha_2$; the solution is then:

$$
y_{\text{homog.}} = c_1 \exp(\alpha_1 t) + c_2 \exp(\alpha_2 t) \quad (6.51)
$$
This requires that both $\alpha_1$ and $\alpha_2$ be negative; or if only one of them is negative, that the $c$ coefficient of the exponentially increasing solution be zero. This class of solutions is called the overdamped class. + +* If $b^2 - 4ac = 0$, the two roots are equal, and we call this root $\alpha_{\text{degen}}$. The solution to the differential equation is + +$$y_{\text{homog.}} = (c_1 + c_2 t) \exp(\alpha_{\text{degen.}} t) \quad (6.52)$$ + +The polynomial, multiplying the exponential function, is of degree one here because the degeneracy of the root is of degree two. This class of solutions is referred to as the critically damped class. + +* If $b^2 - 4ac < 0$, the two roots are complex conjugates of each other, and their real part is negative for physically interesting cases. If we denote these roots by $s_{\pm} = -\alpha \pm j\beta$, the solutions to the homogeneous differential equations take the form: + +$$y_{\text{homog.}} = \exp(-\alpha t)(c_1 \cos(\beta t) + c_2 \sin(\beta t)) \quad (6.53)$$ + +This class of solutions is referred to as the under-damped class. + +## In-Class Exercises + +Find and plot the transient solutions to the following homogeneous equations, using the indicated initial conditions: + +**Pb. 6.28** $a = 1, b = 3, c = 2$ $y(t=0) = 1$ $y'(t=0) = -3/2$ + +**Pb. 6.29** $a = 1, b = 2, c = 1$ $y(t=0) = 1$ $y'(t=0) = 2$ + +**Pb. 6.30** $a = 1, b = 5, c = 6$ $y(t=0) = 1$ $y'(t=0) = 0$ + +### 6.5.2 Steady-State Solutions + +In this subsection, we find the particular solutions of the ODEs when the driving force is a single-term sinusoidal. + +As pointed out previously, because of the superposition principle, it is also possible to write the steady-state solution for any combination of such inputs. This, combined with the Fourier series techniques (briefly discussed in Chapter 7), will also allow you to write the solution for any periodic function. 
\ No newline at end of file diff --git a/samples/texts/348597/page_175.md b/samples/texts/348597/page_175.md new file mode 100644 index 0000000000000000000000000000000000000000..07206388e8a9fd74abf27c661cf032489c23afb1 --- /dev/null +++ b/samples/texts/348597/page_175.md @@ -0,0 +1,42 @@ +We discuss in detail the particular solution for the first-order and the sec- +ond-order differential equations because these represent, as previously +shown in Section 4.7, important cases in circuit analysis. + +**Example 6.5** + +Find the particular solution to the first-order differential equation: + +$$ +a \frac{dy}{dt} + by = A \cos(\omega t) \tag{6.54} +$$ + +Solution: We guess that the particular solution of this ODE is a sinusoidal of the form: + +$$ +\begin{align*} +y_{\text{partic.}}(t) &= B \cos(\omega t - \phi) = B[\cos(\phi)\cos(\omega t) + \sin(\phi)\sin(\omega t)] \tag{6.55} \\ +&= B_c \cos(\omega t) + B_s \sin(\omega t) +\end{align*} +$$ + +Our task now is to find $B_c$ and $B_s$ that would force Eq. (6.55) to be the solution of Eq. (6.54). Therefore, we substitute this trial solution in the differential equation and require that, separately, the coefficients of $\sin(\omega t)$ and $\cos(\omega t)$ terms match on both sides of the resulting equation. These requirements are necessary for the trial solution to be valid at all times. 
The resulting conditions are + +$$ +B_s = \frac{a\omega}{b} B_c \qquad B_c = \frac{Ab}{a^2\omega^2 + b^2} \tag{6.56} +$$ + +from which we can also deduce the polar form of the solution, giving: + +$$ +B^2 = \frac{A^2}{a^2\omega^2 + b^2}, \quad \tan(\phi) = \frac{a\omega}{b} \tag{6.57} +$$ + +**Example 6.6** + +Find the particular solution to the second-order differential equation: + +$$ +a \frac{d^2 y}{dt^2} + b \frac{dy}{dt} + cy = A \cos(\omega t) \quad (6.58) +$$ + +Solution: Again, take the trial particular solution to be of the form: \ No newline at end of file diff --git a/samples/texts/348597/page_176.md b/samples/texts/348597/page_176.md new file mode 100644 index 0000000000000000000000000000000000000000..50a627beb052cca4fd558aa15b5755eb72a33e63 --- /dev/null +++ b/samples/texts/348597/page_176.md @@ -0,0 +1,32 @@ +$$ +\begin{align} +y_{\text{partic.}}(t) &= B \cos(\omega t - \phi) = B[\cos(\phi)\cos(\omega t) + \sin(\phi)\sin(\omega t)] \tag{6.59} \\ +&= B_c \cos(\omega t) + B_s \sin(\omega t) +\end{align} +$$ + +Repeating the same steps as in Example 6.5, we find: + +$$B_s = \frac{b\omega}{(c - a\omega^2)^2 + \omega^2 b^2} A \quad B_c = \frac{(c - a\omega^2)}{(c - a\omega^2)^2 + \omega^2 b^2} A \quad (6.60)$$ + +$$B^2 = \frac{A^2}{(c - a\omega^2)^2 + \omega^2 b^2} \qquad \tan(\phi) = \frac{b\omega}{c - a\omega^2} \tag{6.61}$$ + +### 6.5.3 Applications to Circuit Analysis + +An important application of the above forms for the particular solutions is in circuit analysis with inductors, resistors, and capacitors as elements. We describe later a more efficient analytical method (phasor representation) for solving this kind of problem; however, we believe that it is important that you also become familiar with the present technique. 
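Eqs. (6.60) and (6.61) can be checked by substituting the trial solution back into Eq. (6.58); a short numerical verification in Python (the values of $a$, $b$, $c$, $A$, and $\omega$ below are arbitrary illustrations):

```python
import numpy as np

a, b, c, A, w = 2.0, 0.5, 3.0, 1.0, 1.3       # illustrative parameters
den = (c - a * w**2) ** 2 + (w * b) ** 2
Bs = b * w * A / den                          # Eq. (6.60)
Bc = (c - a * w**2) * A / den

t = np.linspace(0.0, 10.0, 1001)
y = Bc * np.cos(w * t) + Bs * np.sin(w * t)   # trial particular solution
yp = -Bc * w * np.sin(w * t) + Bs * w * np.cos(w * t)
ypp = -w**2 * y
# Residual of  a*y'' + b*y' + c*y - A*cos(w*t)  should vanish identically:
resid = a * ypp + b * yp + c * y - A * np.cos(w * t)
assert np.max(np.abs(resid)) < 1e-12
```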
+ +#### 6.5.3.1 RC Circuit + +Referring to the RC circuit shown in Figure 4.4, we derived the differential equation that the potential difference across the capacitor must satisfy; namely: + +$$RC \frac{dV_C}{dt} + V_C = V_0 \cos(\omega t) \quad (6.62)$$ + +This is a first-order differential equation, the particular solution of which is given in Example 6.5 if we were to identify the coefficients in the ODE as follows: $a = RC, b = 1, A = V_0$. + +#### 6.5.3.2 RLC Circuit + +Referring to the circuit, shown in Figure 4.5, the voltage across the capacitor satisfies the following ODE: + +$$LC \frac{d^2 V_C}{dt^2} + RC \frac{dV_C}{dt} + V_C = V_0 \cos(\omega t) \quad (6.63)$$ + +This equation can be identified with that given in Example 6.6 if the ODE coefficients are specified as follows: $a = LC, b = RC, c = 1, A = V_0$. \ No newline at end of file diff --git a/samples/texts/348597/page_177.md b/samples/texts/348597/page_177.md new file mode 100644 index 0000000000000000000000000000000000000000..c5ee779c9c14ffa329737526e488bf88c3b5177b --- /dev/null +++ b/samples/texts/348597/page_177.md @@ -0,0 +1,31 @@ +## In-Class Exercises + +**Pb. 6.31** This problem pertains to the RC circuit: + +a. Write the output signal $V_C$ in the amplitude-phase representation. + +b. Plot the gain response as a function of a normalized frequency that you will have to select. (The gain of a circuit is defined as the ratio of the amplitude of the output signal over the amplitude of the input signal.) + +c. Determine the phase response of the system (i.e., the relative phase of the output signal to that of the input signal as function of the frequency) also as function of the normalized frequency. + +d. Can this circuit be used as a filter (i.e., a device that lets through only a specified frequency band)? Specify the parameters of this band. + +**Pb. 6.32** This problem pertains to the RLC circuit: + +a. Write the output signal $V_C$ in the amplitude-phase representation. + +b. 
Defining the resonance frequency of this circuit as: $\omega_0 = \frac{1}{\sqrt{LC}}$, find at which frequency the gain is maximum, and find the width of the gain curve. + +c. Plot the gain curve and the phase curve for the following cases: + +$$ \frac{\omega_0 L}{R} = 0.1, 1, 10. $$ + +d. Can you think of a possible application for this circuit? + +**Pb. 6.33** Can you think of a mechanical analog to the RLC circuit? Identify in that case the physical parameters in the corresponding ODE. + +**Pb. 6.34** Assume that the source potential in the RLC circuit has five frequency components at $\omega, 2\omega, ..., 5\omega$ of equal amplitude. Plot the input and output potentials as a function of time over the interval $0 < \omega t < 2\pi$. Assume that $\omega = \omega_0 = \frac{1}{\sqrt{LC}}$ and $\frac{\omega_0 L}{R} = 1$. + +## 6.6 Phasors + +A technique in widespread use to compute the steady-state solutions of systems with sinusoidal input is the method of phasors. In this and the following two chapter sections, we define phasors, learn how to use them to add two or \ No newline at end of file diff --git a/samples/texts/348597/page_178.md b/samples/texts/348597/page_178.md new file mode 100644 index 0000000000000000000000000000000000000000..dbb004290ef1f45f549b682cade1de8c98993dd2 --- /dev/null +++ b/samples/texts/348597/page_178.md @@ -0,0 +1,38 @@ +more signals having the same frequency, and how to find the particular solution of an ODE with a sinusoidal driving function. + +There are two key ideas behind the phasor representation of a signal: + +1. A real, sinusoidal time-varying signal may be represented by a complex time-varying signal. + +2. This complex signal can be represented as the product of a complex number that is independent of time and a complex signal that is dependent on time. + +**Example 6.7** + +Decompose the signal $V = A \cos(\omega t + \phi)$ according to the above prescription. 
+ +*Solution:* This signal can, using the polar representation of complex numbers, also be written as: + +$$V = A \cos(\omega t + \phi) = \operatorname{Re}[A \exp(j(\omega t + \phi))] = \operatorname{Re}[A e^{j\phi} e^{j\omega t}] \quad (6.64)$$ + +where the phasor, denoted with a tilde on top of its corresponding signal symbol, is given by: + +$$\tilde{V} = A e^{j\phi} \qquad (6.65)$$ + +(Warning: Do not mix the tilde symbol that we use here, to indicate a phasor, with the overbar that denotes complex conjugation.) + +Having achieved the above goal of separating the time-independent part of the complex number from its time-dependent part, we now learn how to manipulate these objects. A lot of insight can be immediately gained if we note that this form of the phasor is exactly in the polar form of a complex number, with clear geometric interpretation for its magnitude and phase. + +### 6.6.1 Phasor of Two Added Signals + +The sum of two signals with common frequencies but different amplitudes and phases is + +$$V_{tot.} = A_{tot.} \cos(\omega t + \phi_{tot.}) = A_1 \cos(\omega t + \phi_1) + A_2 \cos(\omega t + \phi_2) \quad (6.66)$$ + +To write the above result in phasor notation, note that the above sum can also be written as follows: + +$$ +\begin{align} +V_{tot.} &= \operatorname{Re}[A_1 \exp(j(\omega t + \phi_1)) + A_2 \exp(j(\omega t + \phi_2))] \tag{6.67} \\ +&= \operatorname{Re}[(A_1 e^{j\phi_1} + A_2 e^{j\phi_2}) e^{j\omega t}] \nonumber +\end{align} +$$ \ No newline at end of file diff --git a/samples/texts/348597/page_179.md b/samples/texts/348597/page_179.md new file mode 100644 index 0000000000000000000000000000000000000000..c4dbb45602e16bf94e57f0d844346105678b19cd --- /dev/null +++ b/samples/texts/348597/page_179.md @@ -0,0 +1,33 @@ +and where + +$$ \tilde{V}_{\text{tot.}} = A_{\text{tot.}} e^{j\phi_{\text{tot.}}} = \tilde{V}_1 + \tilde{V}_2 \quad (6.68) $$ + +**Preparatory Exercise** + +Pb. 
6.35 Write the analytical expression for $A_{tot.}$ and $\phi_{tot.}$ in Eq. (6.68) as functions of the amplitudes and phases of signals 1 and 2. + +The above result can, of course, be generalized to the sum of many signals; +specifically: + +$$ +\begin{align} +V_{\text{tot.}} &= A_{\text{tot.}} \cos(\omega t + \phi_{\text{tot.}}) = \sum_{n=1}^{N} A_n \cos(\omega t + \phi_n) \tag{6.69} \\ +&= \operatorname{Re}\left[\sum_{n=1}^{N} A_n \exp(j\omega t + j\phi_n)\right] = \operatorname{Re}\left[e^{j\omega t} \sum_{n=1}^{N} A_n e^{j\phi_n}\right] +\end{align} +$$ + +and + +$$ \tilde{V}_{\text{tot.}} = \sum_{n=1}^{N} \tilde{V}_n \qquad (6.70) $$ + +$$ \Rightarrow A_{\text{tot.}} = |\tilde{V}_{\text{tot.}}| \qquad (6.71) $$ + +$$ \phi_{\text{tot.}} = \arg(\tilde{V}_{\text{tot.}}) \qquad (6.72) $$ + +That is, the resultant field can be obtained through the simple operation of +adding all the complex numbers (phasors) that represent each of the individ- +ual signals. + +**Example 6.8** + +Given ten signals, the phasor of each of the form $A_n e^{j\phi_n}$, where the amplitude and phase for each have the functional forms $A_n = \frac{1}{n}$ and $\phi_n = n^2$, write a MATLAB program to compute the resultant sum phasor. \ No newline at end of file diff --git a/samples/texts/348597/page_18.md b/samples/texts/348597/page_18.md new file mode 100644 index 0000000000000000000000000000000000000000..cd16edb31d402e10083e2dceef28bcae26295053 --- /dev/null +++ b/samples/texts/348597/page_18.md @@ -0,0 +1,48 @@ +nates are called Cartesian coordinates, and any point in the plane can be +described in this manner. We write for the point, P(x, y). + +Other representations can also be used to locate a point with respect to a +particular set of axes. 
For example, in the polar representation, the point is +specified by an r-coordinate that measures the distance of the point from the +origin, while the θ-coordinate measures the angle which the line passing +through the origin and this point makes with the x-axis. + +The purpose of the following two examples is to learn how to represent +points in a plane and to plot them using MATLAB. + +**Example 1.3** + +Plot the point P(3, 4). + +*Solution:* Enter the following: + +x1=3; +y1=4; +plot(x1,y1,'*') + +Note that the semicolon is used in the above commands to suppress the +echoing of the values of the inputs. The '*' is used to mark the point that we +are plotting. Other authorized symbols for point displays include 'o', '+', +'x', ... the use of which is detailed in **help plot**. + +**Example 1.4** + +Plot the second point, R(2.5, 4) on the graph while keeping point P of the previous example on the graph. + +*Solution:* If we went ahead, defined the coordinates of R, and attempted to +plot the point R through the following commands: + +x2=2.5; +y2=4; +plot(x2,y2,'o') + +we would find that the last plot command erases the previous plot output. + +Thus, what should we do if we want both points plotted on the same +graph? The answer is to use the **hold on** command after the first plot. + +The following illustrates the steps that you should have taken instead of +the above: + +hold on +x2=2.5; \ No newline at end of file diff --git a/samples/texts/348597/page_180.md b/samples/texts/348597/page_180.md new file mode 100644 index 0000000000000000000000000000000000000000..bcfd8adc8b4f3920a9be8f3a220579394014ea3d --- /dev/null +++ b/samples/texts/348597/page_180.md @@ -0,0 +1,30 @@ +**Solution:** Edit and execute the following script M-file: + +N=10; +n=1:N; +amplituden=1./n; +phasen=n.^2; +phasorn=amplituden.*exp(j.*phasen); +phasortot=sum(phasorn); +amplitudetot=abs(phasortot); +phasetot=angle(phasortot) + +In-Class Exercises + +Pb. 
6.36 Could you have estimated the answer to Example 6.8? Justify your reasoning. + +Pb. 6.37 Show that if you add *N* signals with the same magnitude and frequency but with phases equally distributed over the [0, 2π] interval, the resultant phasor will be zero. (Hint: Remember the result for the sum of the roots of unity.) + +Pb. 6.38 Show that the resultant signal from adding *N* signals having the same frequency has the largest amplitude when all the individual signals are in phase (this situation is referred to as maximal constructive interference). + +Pb. 6.39 In this problem, we consider what happens if the frequency and amplitude of *N* different signals are still equal, but the different phases of the signals are randomly distributed over the [0, 2π] interval. Find the amplitude of the resultant signal if *N* = 1000, and compare it with the maximal constructive interference result. (Hint: Recall that the `rand(1,N)` command generates a 1-D array of *N* random numbers from the interval [0, 1].) + +Pb. 6.40 The service provided to your home by the electric utility company is a two-phase service. This means that two 110-V/60-Hz hot lines plus a neutral (ground) line terminate in your panel. The hot lines are π out of phase. + +a. Which signal would you use to drive your clock radio or your toaster? + +b. What configuration will you use to drive your oven or your dryer? + +Pb. 6.41 In most industrial environments, electric power is delivered in what is called a three-phase service. This consists of three 110-V/60-Hz lines with phases given by $(0, 2\pi/3, 4\pi/3)$. What is the maximum voltage that you can obtain from any combination of two of these signals? + +Pb. 6.42 Two- and three-phase power can be extended to *N*-phase power. 
In such a scheme, the N 110-V/60-Hz signals are given by: \ No newline at end of file diff --git a/samples/texts/348597/page_181.md new file mode 100644 index 0000000000000000000000000000000000000000..cd57ae7a7d92a6d28bd620c5ebbcdf3072abc55c --- /dev/null +++ b/samples/texts/348597/page_181.md @@ -0,0 +1,19 @@

$$V_n = 110 \cos\left(120\pi t + \frac{2\pi n}{N}\right) \quad \text{and} \quad n = 0, 1, \dots, N-1$$

While the sum of the voltages of all the lines is zero, the instantaneous power is not. Find the total power, assuming that the power from each line is proportional to the square of its time-dependent expression. (Hint: Use the double angle formula for the cosine function.)

$$p_n(t) = A^2 \cos^2\left(\omega t + \frac{2\pi n}{N}\right) \quad \text{and} \quad P = \sum_{n=0}^{N-1} p_n$$

**NOTE** Another designation in use for a 110-V line is an rms value of 110, and not the value of the maximum amplitude as used above.

## 6.7 Interference and Diffraction of Electromagnetic Waves

### 6.7.1 The Electromagnetic Wave

Electromagnetic waves (em waves) are manifest as radio and TV broadcast signals, microwave communication signals, light of any color, X-rays, γ-rays, etc. While these waves have different sources and methods of generation and require different kinds of detectors, they do share some general characteristics. They differ from each other only in the value of their frequencies. Indeed, it was one of the greatest intellectual achievements of the 19th century when Maxwell developed the system of equations, now named in his honor, to describe these waves' commonality. The most important of these properties is that they all travel in a vacuum with what is called the speed of light $c$ ($c = 3 \times 10^8$ m/s). The detailed study of these waves is the subject of many electrophysics subspecialties.

Electromagnetic waves are traveling waves. 
To understand their mathematical nature, consider a typical expression for the electric field associated with such waves: + +$$E(z, t) = E_0 \cos[kz - \omega t] \qquad (6.73)$$ + +Here, $E_0$ is the amplitude of the wave, $z$ is the spatial coordinate parallel to the direction of propagation of the wave, and $k$ is the wavenumber. \ No newline at end of file diff --git a/samples/texts/348597/page_182.md b/samples/texts/348597/page_182.md new file mode 100644 index 0000000000000000000000000000000000000000..0198e3e78b0592a93f50c161ea85e631277d8bba --- /dev/null +++ b/samples/texts/348597/page_182.md @@ -0,0 +1,30 @@ +Note that if we plot the field for a fixed time, for example, at $t = 0$, the field takes the shape of a sinusoidal function in space: + +$$E(z, t=0) = E_0 \cos[kz] \quad (6.74)$$ + +From the above equation, one deduces that the wavenumber $k = 2\pi/\lambda$, where $\lambda$ is the wavelength of the wave (i.e., the length after which the wave shape reproduces itself). + +Now let us look at the field when an observer, located at $z = 0$, would measure it as a function of time. Then: + +$$E(z=0, t) = E_0 \cos[\omega t] \quad (6.75)$$ + +The temporal period, that is, the time after which the wave shape reproduces itself, is $T = \frac{2\pi}{\omega}$, where $\omega$ is the angular frequency of the wave. + +Next, we want to relate the wavenumber to the angular frequency. To do that, consider an observer located at $z = 0$. The observer measures the field at $t = 0$ to be $E_0$. At time $\Delta t$ later, he should measure the same field, whether he uses Eq. (6.74) or (6.75) if he takes $\Delta z = c\Delta t$, the distance that the wave crest has moved, and where $c$ is the speed of propagation of the wave. From this, one deduces that the wavenumber and the angular frequency are related by $kc = \omega$. This relation holds true for all electromagnetic waves; that is, as the frequency increases, the wavelength decreases. 
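The chain of deductions above ($k = 2\pi/\lambda$, $T = 2\pi/\omega$, $kc = \omega$) can be checked numerically: the field value measured at the origin at $t = 0$ reappears a distance $c\Delta t$ down the axis a time $\Delta t$ later. A small Python sketch with an illustrative wavelength:

```python
import math

c = 3.0e8                     # speed of light in vacuum (m/s)
lam = 0.5                     # illustrative wavelength (m)
k = 2 * math.pi / lam         # wavenumber, k = 2*pi/lambda
w = k * c                     # angular frequency from kc = omega

def E(z, t, E0=1.0):
    return E0 * math.cos(k * z - w * t)    # Eq. (6.73)

dt = 1.0e-10
# The crest at z = 0, t = 0 has moved to z = c*dt at time dt:
assert abs(E(c * dt, dt) - E(0.0, 0.0)) < 1e-9
```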
If two traveling waves have the same amplitude and frequency, but one is traveling to the right while the other is traveling to the left, the result is a standing wave. The following program permits visualization of this standing wave.

```matlab
x=0:0.01:5;
a=1;
k=2*pi;
w=2*pi;
t=0:0.05:2;
M=moviein(41);
for m=1:41;
    z1=cos(k*x-w*t(m));
    z2=cos(k*x+w*t(m));
    z=z1+z2;
    plot(x,z,'r');
    axis([0 5 -3 3]);
```
\ No newline at end of file diff --git a/samples/texts/348597/page_183.md new file mode 100644 index 0000000000000000000000000000000000000000..c087b58b5570567bce29972982a3af85469c6d2e --- /dev/null +++ b/samples/texts/348597/page_183.md @@ -0,0 +1,36 @@
```matlab
    M(:,m)=getframe;
end
movie(M,20)
```

Compare the spatio-temporal profile of the resultant to that for a single wave (i.e., set z2 = 0).

**6.7.2 Addition of Two Electromagnetic Waves**

In many practical instances, we are faced with the problem that two em
waves originating from the same source, but following different spatial
paths, meet again at a certain position. We want to find the total field at this
position resulting from adding the two waves. We first note that, in the simplest case where the amplitudes of the two fields are kept equal, the effect of
the different paths is only to dephase one of the waves from the other by an
amount: Δφ = kΔl, where Δl is the path difference. In effect, the total field is
given by:

$$
E_{\text{tot}}(t) = E_0 \cos[\omega t + \phi_1] + E_0 \cos[\omega t + \phi_2] \quad (6.76)
$$

where $\Delta\phi = \phi_1 - \phi_2$. This form is similar to those studied in the addition of two phasors, and we will hence describe the problem in this language.

The resultant phasor is

$$
\tilde{E}_{\text{tot.}} = \tilde{E}_1 + \tilde{E}_2 \tag{6.77}
$$

**Preparatory Exercise**

**Pb. 6.43** Find the modulus and the argument of the resultant phasor given in Eq. (6.77) as a function of $E_0$ and $\Delta\phi$. 
From this expression, deduce the relation that relates the path difference corresponding to when the resultant phasor has maximum magnitude and that when its magnitude is a minimum. The curve describing the modulus square of the resultant phasor is what is commonly referred to as the interference pattern of two waves. + +**6.7.3 Generalization to N-waves** + +The addition of electromagnetic waves can be generalized to *N*-waves. \ No newline at end of file diff --git a/samples/texts/348597/page_184.md b/samples/texts/348597/page_184.md new file mode 100644 index 0000000000000000000000000000000000000000..f0e6bf7ce123155ee2eb79f082bf6ba4c51444e1 --- /dev/null +++ b/samples/texts/348597/page_184.md @@ -0,0 +1,25 @@ +**Example 6.9** + +Find the resultant field of equal-amplitude *N*-waves, each phase-shifted from the preceding by the same $\Delta\phi$. + +**Solution:** The problem consists of computing an expression of the following kind: + +$$ \tilde{E}_{\text{tot.}} = \tilde{E}_1 + \tilde{E}_2 + \dots + \tilde{E}_n = E_0(1 + e^{j\Delta\phi} + e^{j2\Delta\phi} + \dots + e^{j(N-1)\Delta\phi}) \quad (6.78) $$ + +We have encountered such an expression previously. This sum is that corresponding to the sum of a geometric series. Computing this sum, the modulus square of the resultant phasor is + +$$ \begin{align} |\tilde{E}_{\text{tot.}}|^2 &= E_0^2 \frac{(1 - e^{jN\Delta\phi})}{(1 - e^{j\Delta\phi})} \frac{(1 - e^{-jN\Delta\phi})}{(1 - e^{-j\Delta\phi})} \tag{6.79} \\ &= E_0^2 \left( \frac{1 - \cos(N\Delta\phi)}{1 - \cos(\Delta\phi)} \right) = E_0^2 \left( \frac{\sin^2(N\Delta\phi / 2)}{\sin^2(\Delta\phi / 2)} \right) \nonumber \end{align} $$ + +Because the source is the same for each of the components, the modulus of each phasor is related to the source amplitude by $E_0 = E_{\text{source}}/N$. It is usually as function of the source field that the results are expressed. + +**In-Class Exercises** + +**Pb. 
6.44** Plot the normalized square modulus of the resultant of *N*-waves as a function of $\Delta\phi$ for different values of *N* (5, 50, and 500) over the interval $-\pi < \Delta\phi < \pi$.
+
+**Pb. 6.45** Find the dependence of the central peak value of Eq. (6.79) on *N*.
+
+**Pb. 6.46** Find the phase shift that corresponds to the position of the first minimum of Eq. (6.79).
+
+**Pb. 6.47** Find in Eq. (6.79) the relative height of the first maximum (i.e., the one following the central maximum) to that of the central maximum as a function of *N*.
+
+**Pb. 6.48** In an antenna array, the field representing *N* aligned, equally spaced individual antennae excited by the same source is given by Eq. (6.78). If the line connecting the point of observation to the center of the array is making an angle $\theta$ with the antenna array, the phase shift is $\Delta\phi = \frac{2\pi}{\lambda} d \cos(\theta)$,
\ No newline at end of file
diff --git a/samples/texts/348597/page_185.md b/samples/texts/348597/page_185.md
new file mode 100644
index 0000000000000000000000000000000000000000..76f8b05c5f2874a00b2e440a50dea5ea4049e389
--- /dev/null
+++ b/samples/texts/348597/page_185.md
@@ -0,0 +1,29 @@
+where $\lambda$ is the wavelength of radiation and $d$ is the spacing between two consecutive antennae. Draw the polar plot of the total intensity as a function of the angle $\theta$ for a spacing $d = \lambda/2$ for different values of $N$ (2, 4, 6, and 10).
+
+**Pb. 6.49** Do the results of **Pb. 6.48** suggest to you a strategy for designing a multi-antenna system with sharp directivity? Can you think of a method, short of moving the antennae around, that permits this array to sweep a range of angles with maximum directivity?
+
+**Pb. 6.50** The following program simulates a 25-element array-swept radar beam.
+
+```matlab
+th=0:0.01:pi;
+t=-0.5*sqrt(3):0.05*sqrt(3):0.5*sqrt(3);
+N=25;
+M=moviein(21);
+for m=1:21
+I=(1/N^2)*((sin(N*((pi/4)*cos(th)+(pi/4)*t(m))))...
+    .^2)./((sin((pi/4)*cos(th)+(pi/4)*t(m))).^2);
+polar(th,I);
+M(:,m)=getframe;
+end
+movie(M,10)
+```
+
+a. Determine the range of the sweeping angle.
+
+b. Can you think of an electronic method for implementing this task?
+
+## 6.8 Solving ac Circuits with Phasors: The Impedance Method
+
+In Section 6.5, we examined the conventional technique for solving some simple ac circuit problems. We suggested that using phasors may speed up the determination of the solution. This is the subject of this section.
+
+We will treat, using this technique, the simple *RLC* circuit already solved through other means in order to give you a measure of the simplifications that can be achieved in circuit analysis through this technique. We then proceed to use the phasor technique to investigate another circuit configuration: the infinite *LC* ladder. The power of the phasor technique will also be put to use when we solve circuit problems that are, topologically, much more difficult than the one-loop category encountered thus far. Essentially, a straightforward
\ No newline at end of file
diff --git a/samples/texts/348597/page_186.md b/samples/texts/348597/page_186.md
new file mode 100644
index 0000000000000000000000000000000000000000..3cb1023c66c878baf021965052ca3b8f43e59c57
--- /dev/null
+++ b/samples/texts/348597/page_186.md
@@ -0,0 +1,31 @@
+algebraic technique can give the voltages and currents for any circuit. We illustrate this latter case in Chapter 8.
+
+Recalling that the voltage drops across resistors, inductors, and capacitors can all be expressed as functions of the current, its derivative, and its integral, our goal is to find a technique to replace these operators by simple algebraic operations.
The key to achieving this goal is to realize that:
+
+If:
+
+$$I = I_0 \cos(\omega t + \phi) = \operatorname{Re}[e^{j\omega t} (I_0 e^{j\phi})] \qquad (6.80)$$
+
+Then:
+
+$$\frac{dI}{dt} = -I_0 \omega \sin(\omega t + \phi) = \operatorname{Re}[e^{j\omega t} (I_0 (j\omega) e^{j\phi})] \qquad (6.81)$$
+
+and
+
+$$\int I dt = \frac{I_0}{\omega} \sin(\omega t + \phi) = \operatorname{Re} \left[ e^{j\omega t} \left( I_0 \left( \frac{1}{j\omega} \right) e^{j\phi} \right) \right] \qquad (6.82)$$
+
+From Eqs. (4.25) to (4.27) and Eqs. (6.80) to (6.82), we can deduce that the phasors representing the voltages across resistors, inductors, and capacitors can be written as follows:
+
+$$\tilde{V}_R = \tilde{I}R = \tilde{I}Z_R \qquad (6.83)$$
+
+$$\tilde{V}_L = \tilde{I}(j\omega L) = \tilde{I}Z_L \qquad (6.84)$$
+
+$$\tilde{V}_C = \frac{\tilde{I}}{(j\omega C)} = \tilde{I}Z_C \qquad (6.85)$$
+
+The terms multiplying the current phasor on the RHS of each of the above equations are called the resistor, the inductor, and the capacitor impedances, respectively.
+
+### 6.8.1 RLC Circuit Phasor Analysis
+
+Let us revisit the problem first discussed in Section 4.7. Using Kirchhoff's voltage law and Eqs. (6.83) to (6.85), we can write the following relation between the phasor of the current and that of the source potential:
+
+$$\tilde{V}_s = \tilde{I}R + \tilde{I}(j\omega L) + \frac{\tilde{I}}{(j\omega C)} = \tilde{I}\left[R + j\omega L + \frac{1}{j\omega C}\right] \qquad (6.86)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_187.md b/samples/texts/348597/page_187.md
new file mode 100644
index 0000000000000000000000000000000000000000..dcd1d0ebd8fa0af49d2ae82f3b609c42bdc06797
--- /dev/null
+++ b/samples/texts/348597/page_187.md
@@ -0,0 +1,23 @@
+That is, we can immediately compute the modulus and the argument of the phasor of the current if we know the values of the circuit components, the source voltage phasor, and the frequency of the source. 
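As a concrete check of Eq. (6.86), the sketch below (written in Python rather than the book's MATLAB, since only complex arithmetic is needed) solves for the current phasor using illustrative component values R = 3, L = C = 1, ω = 1, and a unit source phasor, all of which are assumptions chosen for this example:

```python
import cmath

# Illustrative (assumed) values: R = 3, L = C = 1, w = 1, unit source phasor
R, L, C, w, Vs = 3.0, 1.0, 1.0, 1.0, 1.0

# Eq. (6.86): Vs = I * [R + jwL + 1/(jwC)]
Z = R + 1j * w * L + 1 / (1j * w * C)   # total series impedance
I = Vs / Z                              # current phasor

print(abs(I), cmath.phase(I))           # modulus and argument of the current phasor
```

With these values the inductive and capacitive impedances cancel exactly, so the current phasor is purely real, with modulus 1/3 and argument 0.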
+ +## In-Class Exercises + +Using the expression for the circuit resonance frequency $\omega_0$ previously introduced in **Pb. 6.32**, for the RLC circuit: + +**Pb. 6.51** Show that the system's total impedance can be written as: + +$$Z = R + j\omega_0 L \left( v - \frac{1}{v} \right), \quad \text{where} \quad v = \frac{\omega}{\omega_0} = \omega\sqrt{LC}$$ + +**Pb. 6.52** Show that $Z(v) = \bar{Z}(1/v)$; and from this result, deduce the value of $v$ at which the impedance is entirely real. + +**Pb. 6.53** Find the magnitude and the phase of the total impedance. + +**Pb. 6.54** Selecting for the values of the circuit elements $LC = 1$, $RC = 3$, and $\omega = 1$, compare the results that you obtain through the phasor analytical method with the numerical results for the voltage across the capacitor in an RLC circuit that you found while solving Eq. (4.36). + +## The Transfer Function + +As you would have discovered solving **Pb. 6.54**, the ratio of the phasor of the potential difference across the capacitor with that of the ac source can be directly calculated once the value of the current phasor is known. This ratio is called the Transfer Function for this circuit if the voltage across the capacitor is taken as the output of this circuit. It is obtained by combining Eqs. (6.85) and (6.86) and is given by: + +$$\frac{\tilde{V}_c}{\tilde{V}_s} = \frac{1}{(j\omega RC - \omega^2 LC + 1)} = H(\omega) \qquad (6.87)$$ + +The Transfer Function concept can be generalized to any ac circuit. It refers to the ratio of the output voltage phasor to the input voltage phasor. It incorporates all the relevant information on the details of the circuit. It is the standard form for representing the response of a circuit to a single sinusoidal function input. 
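Before attempting Pb. 6.54, Eq. (6.87) can also be evaluated numerically; the following Python sketch (a companion check outside the book's MATLAB; the values LC = 1 and RC = 3 are those of Pb. 6.54, and the function name `H` is ad hoc) computes the gain and phase at ω = 1:

```python
import cmath

def H(w, LC=1.0, RC=3.0):
    # Eq. (6.87): Transfer Function of the RLC circuit with the
    # capacitor voltage taken as the output (LC, RC as in Pb. 6.54)
    return 1 / (1j * w * RC - w**2 * LC + 1)

h = H(1.0)   # evaluate at w = 1, i.e., at the frequency 1/sqrt(LC)
print(abs(h), cmath.phase(h))
```

At ω = 1 = 1/√(LC) the real terms of the denominator cancel, leaving H = 1/(3j): a gain of 1/3 and a phase of −π/2.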
\ No newline at end of file diff --git a/samples/texts/348597/page_188.md b/samples/texts/348597/page_188.md new file mode 100644 index 0000000000000000000000000000000000000000..0a5caa36752e9ede8a20205f9c30d3b36c2a7f81 --- /dev/null +++ b/samples/texts/348597/page_188.md @@ -0,0 +1,26 @@ +## *Homework Problem* + +**Pb. 6.55** Plot the magnitude and the phase of the Transfer Function given in Eq. (6.87) as a function of $\omega$, for $LC = 1, RC = 3$. + +### 6.8.2 The Infinite LC Ladder + +The LC ladder consists of an infinite repetition of the basic elements shown in Figure 6.2. + +FIGURE 6.2 +The circuit of an infinite LC ladder. + +Using the definition of impedances, the phasors of the $n$ and $(n + 1)$ voltages and currents are related through: + +$$ \tilde{V}_n - \tilde{V}_{n+1} = Z_1 \tilde{I}_n \quad (6.88) $$ + +$$ \tilde{V}_{n+1} = (\tilde{I}_n - \tilde{I}_{n+1})Z_2 \quad (6.89) $$ + +From Eq. (6.88), we deduce the following expressions for $\tilde{I}_n$ and $\tilde{I}_{n+1}$: + +$$ \tilde{I}_n = \frac{\tilde{V}_n - \tilde{V}_{n+1}}{Z_1} \quad (6.90) $$ + +$$ \tilde{I}_{n+1} = \frac{\tilde{V}_{n+1} - \tilde{V}_{n+2}}{Z_1} \quad (6.91) $$ + +Substituting these values for the currents in Eq. (6.89), we deduce a second-order difference equation for the voltage phasor: + +$$ \tilde{V}_{n+2} - \left( \frac{Z_1}{Z_2} + 2 \right) \tilde{V}_{n+1} + \tilde{V}_n = 0 \quad (6.92) $$ \ No newline at end of file diff --git a/samples/texts/348597/page_189.md b/samples/texts/348597/page_189.md new file mode 100644 index 0000000000000000000000000000000000000000..b26351f8d9660ae2537e3bf88392b9befd81c2fa --- /dev/null +++ b/samples/texts/348597/page_189.md @@ -0,0 +1,23 @@ +The solution of this difference equation can be directly obtained by the techniques discussed in Chapter 2 for obtaining solutions of homogeneous difference equations. 
The physically meaningful solution is given by:
+
+$$ \lambda = 1 + \frac{1}{Z_2} \left\{ \frac{Z_1}{2} - \sqrt{\frac{Z_1^2}{4} + Z_2 Z_1} \right\} \quad (6.93) $$
+
+and the voltage phasor at node $n$ is then given by:
+
+$$ \tilde{V}_n = \tilde{V}_s \lambda^n \quad (6.94) $$
+
+We consider the model where $Z_1 = j\omega L$ and $Z_2 = 1/(j\omega C)$, respectively, for an inductor and a capacitor. The expression for $\lambda$ then takes the following form:
+
+$$ \lambda = \left(1 - \frac{v^2}{2}\right) - j\left(v^2 - \frac{v^4}{4}\right)^{1/2} \quad (6.95) $$
+
+where the normalized frequency is defined by $v = \omega / \omega_0 = \omega\sqrt{LC}$. We plot in Figure 6.3 the magnitude and the phase of the root $\lambda$ as a function of the normalized frequency.
+
+As can be directly observed from an examination of Figure 6.3, the magnitude of $\lambda$ is equal to 1 (i.e., the magnitude of $\tilde{V}_n$ is also 1) for $v < v_{cutoff} = 2$, while it drops precipitously after that, with the dropoff in the potential much steeper with increasing node number. Physically, this represents extremely short penetration through the ladder for signals with frequencies larger than the cutoff frequency. Furthermore, note that for $v < v_{cutoff} = 2$, the magnitude of the phase of $\tilde{V}_n$ increases linearly with the index $n$; and because the phase is negative, it corresponds to a delay in the signal as it propagates down the ladder, which corresponds to a finite velocity of propagation for the signal.
+
+Before we leave this ladder circuit, it is worth addressing a practical concern. While it is impossible to realize an infinite-dimensional ladder, the above conclusions do not change by much if we replace the infinite ladder by a finite ladder and terminate it after a while with a resistor whose resistance is equal to $\sqrt{L/C}$.
+
+## In-Class Exercise
+
+**Pb. 6.56** Repeat the analysis given above for the LC ladder circuit, if instead we were to:
+
+a. 
Interchange the positions of the inductors and the capacitors in the ladder circuit. Based on this result and the above LC result, can you design a bandpass filter with a flat response? \ No newline at end of file diff --git a/samples/texts/348597/page_19.md b/samples/texts/348597/page_19.md new file mode 100644 index 0000000000000000000000000000000000000000..72d47bc425350f88e59c9b9e77860d48b590f072 --- /dev/null +++ b/samples/texts/348597/page_19.md @@ -0,0 +1,34 @@ +`y2=4;` +`plot(x2,y2,'o')` +`hold off` + +The **hold off** turns off the **hold on** feature. + +NOTES + +1. There is no limit to the number of plot commands you can type before the hold is turned off. + +2. An alternative method for viewing multiple points on the same graph is available: we may instead, following the entering of the values of `x1`, `y1`, `x2`, `y2`, enter: + +`plot(x1,y1,'*',x2,y2,'o')` + +This has the advantage, in MATLAB, of assigning automatically a different color to each point. + +### 1.3.1 Axes Commands + +You may have noticed that MATLAB automatically adjusts the scale on a graph to accommodate the coordinates of the points being plotted. The axis scaling can be manually enforced by using the command `axis([xmin xmax ymin ymax])`. Make sure that the minimum axis value is less than the maximum axis value or an error will result. + +In addition to being able to adjust the scale of a graph, you can also change the aspect ratio of the graphics window. This is useful when you wish to see the correct *x* to *y* scaling. For example, without this command, a circle will look more like an ellipse. + +**Example 1.5** + +Plot the vertices of a square, keeping the geometric proportions unaltered. + +**Solution:** Enter the following: + +`x1=-1;y1=-1;x2=1;y2=-1;x3=-1;y3=1;x4=1;y4=1;` +`plot(x1,y1,'o',x2,y2,'o',x3,y3,'o',x4,y4,'o')` +`axis([-2 2 -2 2])` +`axis square %square shape` + +Note that prior to the **axis square** command, the square looked like a rectangle. 
If you want to go back to the default aspect ratio, type **axis normal**. The `%` symbol is used so that you can type comments in your program. Comments following the `%` symbol are ignored by the MATLAB interpreter.
\ No newline at end of file
diff --git a/samples/texts/348597/page_190.md b/samples/texts/348597/page_190.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b7cadbcdff732300e6aaa90817266a2ee369215
--- /dev/null
+++ b/samples/texts/348597/page_190.md
@@ -0,0 +1,12 @@
+b. Replace the inductor elements with resistors. In particular, compute the input impedance of this circuit.
+
+FIGURE 6.3
+The magnitude (left panel) and the phase (right panel) of the characteristic root of the infinite LC ladder.
+
+## 6.9 Transfer Function for a Difference Equation with Constant Coefficients*
+
+In Section 6.8.1, we found the Transfer Function for what essentially was a simple ODE. In this section, we generalize the technique to find the Transfer Function of a difference equation with constant coefficients. The form of the difference equation is given by:
+
+$$y(k) = b_0 u(k) + b_1 u(k-1) + \dots + b_m u(k-m) \\ - a_1 y(k-1) - a_2 y(k-2) - \dots - a_n y(k-n) \tag{6.96}$$
+
+Along the same route that we followed in the phasor treatment of ODEs, assume that both the input and the output are of the form:
\ No newline at end of file
diff --git a/samples/texts/348597/page_191.md b/samples/texts/348597/page_191.md
new file mode 100644
index 0000000000000000000000000000000000000000..211101fcbfb940e57e158e7b7999a01e206adeea
--- /dev/null
+++ b/samples/texts/348597/page_191.md
@@ -0,0 +1,23 @@
+$$u(k) = Ue^{j\Omega k} \quad \text{and} \quad y(k) = Y e^{j\Omega k} \tag{6.97}$$
+
+where $\Omega$ is a normalized frequency; typically, in electrical engineering applications, it is the real frequency multiplied by the sampling time. 
Substituting these expressions into the difference equation, we obtain:
+
+$$\frac{Y}{U} = \frac{\sum_{l=0}^{m} b_l e^{-j\Omega l}}{1 + \sum_{l=1}^{n} a_l e^{-j\Omega l}} = \frac{\sum_{l=0}^{m} b_l z^{-l}}{1 + \sum_{l=1}^{n} a_l z^{-l}} \equiv H(z) \tag{6.98}$$
+
+where, by convention, $z = e^{j\Omega}$.
+
+**Example 6.10**
+
+Find the Transfer Function of the following difference equation:
+
+$$y(k) = u(k) + \frac{2}{3}y(k-1) - \frac{1}{3}y(k-2) \tag{6.99}$$
+
+**Solution:** By direct substitution into Eq. (6.98), we find:
+
+$$H(z) = \frac{1}{1 - \frac{2}{3}z^{-1} + \frac{1}{3}z^{-2}} = \frac{z^2}{z^2 - \frac{2}{3}z + \frac{1}{3}} \tag{6.100}$$
+
+It is to be noted that the Transfer Function is a ratio of two polynomials. The zeros of the numerator are called the zeros of the Transfer Function, while the zeros of the denominator are called its poles. If the coefficients of the difference equation are real, then by the Fundamental Theorem of Algebra, the zeros and the poles are either real or come in pairs of complex conjugate numbers.
+
+The Transfer Function fully describes any linear system. As will be shown in linear systems courses, the inverse z-transform of the Transfer Function gives the weights for the solution of the difference equation, while the values of the poles of the Transfer Function determine what are called the system modes of the solution. These are the modes intrinsic to the circuit, and they do not depend on the specific form of the input function.
+
+Furthermore, it is worth noting that the study of recursive filters, the backbone of digital signal processing, can be simply reduced to a study of the Transfer Function under different configurations. In Applications 2 and 3 that follow, we briefly illustrate two particular digital filters in wide use. 
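Returning to Example 6.10, one can spot-check by brute force that Eq. (6.100) indeed captures the steady-state response. The Python sketch below (an illustration, not from the text; the normalized frequency Ω = 0.5 and the run length of 400 steps are arbitrary choices) iterates Eq. (6.99) on the input u(k) = e^{jΩk} and compares the late-time ratio y(k)/u(k) with the closed form:

```python
import cmath

Omega = 0.5                        # assumed normalized frequency for the check
z = cmath.exp(1j * Omega)

# Closed-form Transfer Function of Example 6.10, Eq. (6.100)
H = z**2 / (z**2 - (2/3) * z + 1/3)

# Iterate the difference equation, Eq. (6.99), on the input u(k) = e^{j*Omega*k}
y1 = y2 = 0.0                      # y(k-1) and y(k-2), initially at rest
for k in range(400):
    u = cmath.exp(1j * Omega * k)
    y = u + (2/3) * y1 - (1/3) * y2
    y1, y2 = y, y1

# Once the transient (set by the poles, of modulus sqrt(1/3) < 1) has died out,
# y(k)/u(k) approaches H(e^{j*Omega})
ratio = y / cmath.exp(1j * Omega * 399)
print(abs(ratio - H))              # essentially zero
```

The transient decays because both poles lie inside the unit circle; the surviving ratio is exactly the frequency response predicted by Eq. (6.98).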
\ No newline at end of file diff --git a/samples/texts/348597/page_192.md b/samples/texts/348597/page_192.md new file mode 100644 index 0000000000000000000000000000000000000000..69d37b11e071c6facdd74a246b76827f9f5c6ae9 --- /dev/null +++ b/samples/texts/348597/page_192.md @@ -0,0 +1,48 @@ +**Application 1** + +Using the Transfer Function formalism, we want to estimate the accuracy of +the three integrating schemes discussed in Chapter 4. We want to compare +the Transfer Function of each of those algorithms to that of the exact result, +obtained upon integrating exactly the function e$^{j\omega t}$. + +The exact result for integrating the function $e^{j\omega t}$ is, of course, $\frac{e^{j\omega t}}{j\omega}$, thus giving for the exact Transfer Function for integration the expression: + +$$ +H_{\text{exact}} = \frac{1}{j\omega} \tag{6.101} +$$ + +Before proceeding with the computation of the transfer function for the dif- +ferent numerical schemes, let us pause for a moment and consider what we +are actually doing when we numerically integrate a function. We go through +the following steps: + +1. We discretize the time interval over which we integrate; that is, we define the sampling time $\Delta t$, such that the discrete points abscissa are given by $k(\Delta t)$, where k is an integer. + +2. We write a difference equation for the integral relating its values at the discrete points with its values and that of the integrand at discrete points with equal or smaller indices. + +3. We obtain the value of the integral by iterating the defining differ- +ence equation. 
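As an illustration of these three steps (a Python sketch, not from the text; the integrand frequency ω = 1, the sampling of 40 points per period, and the quarter-period integration range are all assumptions for the example), the trapezoidal scheme of Chapter 4 iterates its defining difference equation:

```python
import cmath
import math

w = 1.0                  # assumed angular frequency of the test integrand
dt = 2 * math.pi / 40    # step 1: discretize time (40 samples per period, an assumption)
K = 10                   # number of iterations (a quarter period here)

def y(k):
    # integrand e^{jwt} sampled at the discrete abscissa t = k*dt
    return cmath.exp(1j * w * k * dt)

# Steps 2 and 3: write the trapezoidal difference equation
# I(k+1) = I(k) + (dt/2)*(y(k+1) + y(k)) and iterate it
I = 0.0
for k in range(K):
    I = I + (dt / 2) * (y(k + 1) + y(k))

# Compare with the exact integral of e^{jwt} from 0 to K*dt
I_exact = (cmath.exp(1j * w * K * dt) - 1) / (1j * w)
print(abs(I - I_exact))  # small discretization error
```

The small residual is exactly the kind of discretization error that the Transfer Function ratios below quantify as a function of ωΔt.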
+ +The test function used for the estimation of the integration methods accu- +racy is written at the discrete points as: + +$$ +y(k) = e^{j k \omega (\Delta t)} \tag{6.102} +$$ + +The difference equations associated with each of the numerical integration +schemes are: + +$$ +I_T(k+1) = I_T(k) + \frac{\Delta t}{2}(y(k+1) + y(k)) \quad (6.103) +$$ + +$$ +I_{MP}(k+1) = I_{MP}(k) + \Delta t y(k+1/2) \quad (6.104) +$$ + +$$ +I_S(k+1) = I_S(k-1) + \frac{\Delta t}{3} (y(k+1) + 4y(k) + y(k-1)) \quad (6.105) +$$ + +leading to the following expressions for the respective Transfer Functions: \ No newline at end of file diff --git a/samples/texts/348597/page_193.md b/samples/texts/348597/page_193.md new file mode 100644 index 0000000000000000000000000000000000000000..06ff3b4f705bdabe9fc7c1087d1777aa294da22e --- /dev/null +++ b/samples/texts/348597/page_193.md @@ -0,0 +1,24 @@ +$$H_T = \frac{\Delta t}{2} \frac{e^{j\omega(\Delta t)} + 1}{e^{j\omega(\Delta t)} - 1} \quad (6.106)$$ + +$$H_{MP} = \Delta t \frac{e^{j\omega(\Delta t)/2}}{e^{j\omega(\Delta t)} - 1} \quad (6.107)$$ + +$$H_S = \frac{\Delta t (e^{j\omega(\Delta t)} + 4 + e^{-j\omega(\Delta t)})}{3 \left(e^{j\omega(\Delta t)} - e^{-j\omega(\Delta t)}\right)} \quad (6.108)$$ + +The measures of accuracy of the integration scheme are the ratios of these Transfer Functions to that of the exact expression. These are given, respectively, by: + +$$R_T = \frac{(\omega \Delta t / 2)}{\sin(\omega \Delta t / 2)} \cos(\omega \Delta t / 2) \quad (6.109)$$ + +$$R_{MP} = \frac{(\omega \Delta t / 2)}{\sin(\omega \Delta t / 2)} \quad (6.110)$$ + +$$R_S = \left( \frac{\omega \Delta t}{3} \right) \frac{\cos(\omega \Delta t) + 2}{\sin(\omega \Delta t)} \quad (6.111)$$ + +Table 6.1 gives the value of this ratio as a function of the number of sampling points, per oscillation period, selected in implementing the different integration subroutines: + +TABLE 6.1 +Accuracy of the Different Elementary Numerical Integrating Methods + +
+| Number of Sampling Points in a Period | $R_T$ | $R_{MP}$ | $R_S$ |
+|:---:|:---:|:---:|:---:|
+| 100 | 0.9997 | 1.0002 | 1.0000 |
+| 50 | 0.9986 | 1.0007 | 1.0000 |
+| 40 | 0.9978 | 1.0011 | 1.0000 |
+| 30 | 0.9961 | 1.0020 | 1.0000 |
+| 20 | 0.9909 | 1.0046 | 1.0001 |
+| 10 | 0.9591 | 1.0206 | 1.0014 |
+| 5 | 0.7854 | 1.1107 | 1.0472 |
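The entries of Table 6.1 can be reproduced directly from Eqs. (6.109) to (6.111); the short Python check below (a companion to the book's MATLAB, with the reading, inferred from the tabulated values, that n sampling points in a period span n − 1 subintervals, i.e., ωΔt = 2π/(n − 1)):

```python
import math

def ratios(n):
    # n sampling points in one period -> n - 1 subintervals, so w*dt = 2*pi/(n - 1)
    x = 2 * math.pi / (n - 1)
    RT = (x / 2) / math.tan(x / 2)                   # Eq. (6.109), trapezoidal rule
    RMP = (x / 2) / math.sin(x / 2)                  # Eq. (6.110), midpoint rule
    RS = (x / 3) * (math.cos(x) + 2) / math.sin(x)   # Eq. (6.111), Simpson's rule
    return RT, RMP, RS

print([round(r, 4) for r in ratios(5)])   # -> [0.7854, 1.1107, 1.0472], the last row of the table
```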
+ +As can be noted, the error is less than 1% for any of the discussed methods as long as the number of points in one oscillation period is larger than 20, although the degree of accuracy is best, as we expected based on geometrical arguments, for Simpson's rule. + +In a particular application, where a finite number of frequencies are simultaneously present, the choice of $(\Delta t)$ for achieving a specified level of accuracy \ No newline at end of file diff --git a/samples/texts/348597/page_194.md b/samples/texts/348597/page_194.md new file mode 100644 index 0000000000000000000000000000000000000000..724d190e5800c9762d6d39c4897db88c0b76f06e --- /dev/null +++ b/samples/texts/348597/page_194.md @@ -0,0 +1,51 @@ +in the integration subroutine should ideally be determined using the shortest +of the periods present in the integrand. + +**Application 2** + +As mentioned earlier, the Transfer Function technique is the prime tool for +the analysis and design of digital filters. In this and the following application, +we illustrate its use in the design of a low-pass digital filter and a digital pro- +totype bandpass filter. + +The low-pass filter, as its name indicates, filters out the high-frequency +components from a signal. + +Its defining difference equation is given by: + +$$ +y(k) = (1-a)y(k-1) + au(k) \tag{6.112} +$$ + +giving for its Transfer Function the expression: + +$$ +H(z) = \frac{a}{1 - (1-a)z^{-1}} \tag{6.113} +$$ + +Written as a function of the normalized frequency, it is given by: + +$$ +H(e^{j\Omega}) = \frac{ae^{j\Omega}}{e^{j\Omega} - (1-a)} \quad (6.114) +$$ + +We plot, in Figure 6.4, the magnitude and the phase of the transfer function as a function of the normalized frequency for the value of *a* = 0.1. Note that the gain is equal to 1 for Ω = 0, and decreases monotonically thereafter. + +To appreciate the operation of this filter, consider a sinusoidal signal that has +been contaminated by the addition of noise. 
We can simulate the noise by adding to the original signal an array consisting of random numbers with maximum amplitude equal to 30% of the original signal. The top panel of Figure 6.5 represents the contaminated signal. If we pass this signal through a low-pass filter, the lower panel of Figure 6.5 shows the filtered output signal.
+
+As can be observed, the noise, which is a high-frequency signal, has been filtered out and the signal has been almost restored to its original shape before the noise was added.
+
+The following *script M-file* simulates the above operations:
+
+```matlab
+t=linspace(0,4*pi,300);
+N=length(t);
+s=sin(t);
+n=0.3*rand(1,N);
+u=s+n;
+```
\ No newline at end of file
diff --git a/samples/texts/348597/page_195.md b/samples/texts/348597/page_195.md
new file mode 100644
index 0000000000000000000000000000000000000000..e034c00d9a9c1b9b1bb7ad803bda0e1835be1fa4
--- /dev/null
+++ b/samples/texts/348597/page_195.md
@@ -0,0 +1,5 @@
+FIGURE 6.4
+The gain (top panel) and phase (bottom panel) responses of a low-pass filter as a function of the frequency.
+
+FIGURE 6.5
+The action of a low-pass filter. Top panel: Profile of the signal contaminated by noise. Bottom panel: Profile of the filtered signal.
\ No newline at end of file
diff --git a/samples/texts/348597/page_196.md b/samples/texts/348597/page_196.md
new file mode 100644
index 0000000000000000000000000000000000000000..3200a98aad6c6885aed0ab41a883de93f583e010
--- /dev/null
+++ b/samples/texts/348597/page_196.md
@@ -0,0 +1,39 @@
+```matlab
+y(1)=u(1);
+for k=2:N
+  y(k)=0.9*y(k-1)+0.1*u(k);
+end
+subplot(2,1,1)
+plot(t,u)
+axis([0 4*pi -1.5 1.5]);
+title('Noisy Signal')
+subplot(2,1,2)
+plot(t,y)
+title('Filtered Signal')
+axis([0 4*pi -1.5 1.5]);
+```
+
+**Application 3**
+
+The digital prototype bandpass filter ideally filters out from a signal all frequencies lower than a given frequency and higher than another frequency.
In practice, the cutoffs are not so sharp, and the lower and higher cutoff frequencies of the bandpass are defined as those at which the gain curve (i.e., the magnitude of the Transfer Function as a function of the frequency) is at $(1/\sqrt{2})$ of its maximum value.
+
+The difference equation that describes this prototype filter is
+
+$$
+\begin{aligned}
+y(k) = & (1-r)\sqrt{1-2r\cos(2\Omega_0)+r^2}\,u(k) \\
+ & + 2r\cos(\Omega_0)y(k-1) - r^2y(k-2)
+\end{aligned}
+\quad (6.115) $$
+
+where $\Omega_0$ is the normalized frequency with maximum gain and $r$ is a number close to 1.
+
+The purpose of the following analysis is, given the lower and higher cutoff normalized frequencies, to find the quantities $\Omega_0$ and $r$ in the above difference equation.
+
+The Transfer Function for the above difference equation is given by:
+
+$$ H(z) = \frac{g_0 z^2}{z^2 - 2r\cos(\Omega_0)z + r^2} \quad (6.116) $$
+
+where
+
+$$ g_0 = (1-r)\sqrt{1 - 2r\cos(2\Omega_0) + r^2} \quad (6.117) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_197.md b/samples/texts/348597/page_197.md
new file mode 100644
index 0000000000000000000000000000000000000000..920b119f91e2418d51429cb1438d59b0d5ac6d34
--- /dev/null
+++ b/samples/texts/348597/page_197.md
@@ -0,0 +1,32 @@
+and
+
+$$z = e^{j\Omega}$$
+
+The gain of this filter, or equivalently the magnitude of the Transfer Function, is
+
+$$|H(e^{j\Omega})| = \frac{(1-r)\sqrt{1-2r\cos(2\Omega_0)+r^2}}{\left(1+Ar+Br^2+Ar^3+r^4\right)^{1/2}} \quad (6.118)$$
+
+where
+
+$$A = -4 \cos(\Omega) \cos(\Omega_0) \quad (6.119)$$
+
+$$B = 4\cos^2(\Omega) + 4\cos^2(\Omega_0) - 2 \quad (6.120)$$
+
+The lower and upper cutoff frequencies are defined, as previously noted, by the condition:
+
+$$|H(e^{j\Omega_{(1,2)}})| = \frac{1}{\sqrt{2}} \qquad (6.121)$$
+
+Substituting condition (6.121) in the gain expression (6.118) leads to the conclusion that the cutoff frequencies are obtained from the solutions of the following quadratic equation:
+
+$$\begin{aligned} 
& \cos^2(\Omega) - \left[ \frac{(1+r^2)\cos(\Omega_0)}{r} \right] \cos(\Omega) \\ & + \frac{(1-r)^2}{4r^2} \left[ 4r\cos(2\Omega_0) - (1-r)^2 \right] + \cos^2(\Omega_0) = 0 \end{aligned} \quad (6.122)$$
+
+Adding and subtracting the roots of this equation, we deduce, after some straightforward algebra, the following determining equations for $\Omega_0$ and $r$:
+
+1. $r$ is the root in the interval $[0, 1]$ of the following eighth-degree polynomial:
+
+$$r^8 + (a-b)r^6 - 8ar^5 + (14a - 2b - 2)r^4 - 8ar^3 + (a-b)r^2 + 1 = 0 \quad (6.123)$$
+
+where
+
+$$a = (\cos(\Omega_1) + \cos(\Omega_2))^2 \quad (6.124)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_198.md b/samples/texts/348597/page_198.md
new file mode 100644
index 0000000000000000000000000000000000000000..4790aab2704a14fb7cc1413211939314ebaeeb9a
--- /dev/null
+++ b/samples/texts/348597/page_198.md
@@ -0,0 +1,36 @@
+$$b = (\cos(\Omega_1) - \cos(\Omega_2))^2 \quad (6.125)$$
+
+2. $\Omega_0$ is given by:
+
+$$\Omega_0 = \cos^{-1} \left[ \frac{ra^{1/2}}{1+r^2} \right] \qquad (6.126)$$
+
+**Example 6.12**
+
+Write a program to determine the parameters *r* and $\Omega_0$ of a prototype bandpass filter if the cutoff frequencies and the sampling time are given.
+
+*Solution:* The following *script M-file* implements the above target:
+
+```matlab
+f1= ; %enter the lower cutoff
+f2= ; %enter the upper cutoff
+tau= ; %enter the sampling time
+w1=2*pi*f1*tau;
+w2=2*pi*f2*tau;
+a=(cos(w1)+cos(w2))^2;
+b=(cos(w1)-cos(w2))^2;
+p=[1 0 a-b -8*a 14*a-2*b-2 -8*a a-b 0 1];
+rr=roots(p);
+r=rr(find(rr>0 & rr<1 & imag(rr)==0));
+w0=acos((r*a^(1/2))/(1+r^2));
+f0=(1/(2*pi*tau))*w0
+```
+
+In Figure 6.6, we show the gain and phase response for this filter, for the case in which the cutoff frequencies are chosen to be 1000 Hz and 1200 Hz, and the sampling time is 10 µs. 
+ +To test the action of this filter, we input into it a signal that consists of a mixture of a sinusoid having a frequency at the frequency of the maximum gain of this filter and a number of its harmonics; for example, + +$$u(t) = \sin(2\pi f_0 t) + 0.5 \sin(4\pi f_0 t) + 0.6 \sin(6\pi f_0 t) \quad (6.127)$$ + +We show in Figure 6.7 the input and the filtered signals. As expected from an analysis of the gain curve, only the fundamental frequency signal has survived. The amplitude of the filtered signal settles to that of the fundamental frequency signal following a short transient period. + +**NOTE** Before leaving this topic, it is worth noting that the above prototype bandpass filter can have sharper cutoff features (i.e., decreasing the value of \ No newline at end of file diff --git a/samples/texts/348597/page_199.md b/samples/texts/348597/page_199.md new file mode 100644 index 0000000000000000000000000000000000000000..de28e780401b66fb880acb56f5188bab813f07eb --- /dev/null +++ b/samples/texts/348597/page_199.md @@ -0,0 +1,5 @@ +FIGURE 6.6 +The transfer function of a prototype bandpass filter. Top panel: Plot of the gain curve as function of the normalized frequency. Bottom panel: Plot of the phase curve as function of the normalized frequency. + +FIGURE 6.7 +The filtering action of a prototype bandpass filter. Top panel: Input signal consists of a combination of a fundamental frequency signal (equal to the frequency corresponding to the filter maximum gain) and two of its harmonics. Bottom panel: Filtered signal. 
\ No newline at end of file diff --git a/samples/texts/348597/page_2.md b/samples/texts/348597/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..255eb464f37a413069b6ade1d222c6299c03e412 --- /dev/null +++ b/samples/texts/348597/page_2.md @@ -0,0 +1,6 @@ +ELEMENTARY +MATHEMATICAL and +COMPUTATIONAL TOOLS +for ELECTRICAL and +COMPUTER ENGINEERS +USING MATLAB® \ No newline at end of file diff --git a/samples/texts/348597/page_20.md b/samples/texts/348597/page_20.md new file mode 100644 index 0000000000000000000000000000000000000000..8ea8254921780b4a72f2d8fd9616e13cb0fb47b8 --- /dev/null +++ b/samples/texts/348597/page_20.md @@ -0,0 +1,38 @@ +### 1.3.2 Labeling a Graph + +To add labels to your graph, the functions `xlabel`, `ylabel`, and `title` can be used as follows: + +```matlab +xlabel('x-axis') +ylabel('y-axis') +title('points in a plane') +``` + +If you desire to add a caption anywhere in the graph, you can use the MATLAB command `gtext('caption')` and place it at the location of your choice, on the graph, by clicking the mouse when the crosshair is properly centered there. + +### 1.3.3 Plotting a Point in 3-D + +In addition to being able to plot points on a plane (2-D space), MATLAB is also able to plot points in a three-dimensional space (3-D space). For this, we utilize the `plot3` function. + +#### Example 1.6 + +Plot the point P(3, 4, 5). + +*Solution:* Enter the following commands: + +```matlab +x1=3; y1=4; z1=5; +plot3(x1,y1,z1,'*') +``` + +You can also plot multiple points in a 3-D space in exactly the same way as you did on a plane. Axis adjustment can still be used, but the vector input into the `axis` command must now have six entries, as follows: + +```matlab +axis([xmin xmax ymin ymax zmin zmax]) +``` + +You can similarly label your 3-D figure using `xlabel`, `ylabel`, `zlabel`, and `title`. 
+ +## 1.4 M-files + +In the last section, we found that to complete a figure with a caption, we had to enter several commands one by one in the command window. Typing \ No newline at end of file diff --git a/samples/texts/348597/page_200.md b/samples/texts/348597/page_200.md new file mode 100644 index 0000000000000000000000000000000000000000..40d045a4294d292191bd41513d65d264f9e95596 --- /dev/null +++ b/samples/texts/348597/page_200.md @@ -0,0 +1,25 @@ +the gain curve for frequencies below the lower cutoff and higher than the upper cutoff) through having many of these prototype filters in cascade. This will be a topic of study in future linear system or filter design courses. + +## In-Class Exercises + +**Pb. 6.59** Work out the missing algebraic steps in the derivation leading to Eqs. (6.123) through (6.126). + +**Pb. 6.60** Given the following values for the lower and upper cutoff frequencies and the sampling time: + +$$f_1 = 200 \text{ Hz}; f_2 = 400 \text{ Hz}; \tau = 10^{-5} \text{ s}$$ + +find $f_0$ and plot the gain curve as function of the normalized frequency for the bandpass prototype filter. + +## 6.10 MATLAB Commands Review + +**abs** computes the modulus of a complex number. + +**angle** computes the argument of a complex number. + +**conj** computes the complex conjugate of a complex number. + +**find** finds the locations of elements in an array that satisfies certain specified conditions. + +**imag** computes the imaginary part of a complex number. + +**real** computes the real part of a complex number. 
\ No newline at end of file diff --git a/samples/texts/348597/page_201.md b/samples/texts/348597/page_201.md new file mode 100644 index 0000000000000000000000000000000000000000..42fa0fb0b5f4f83d37efddeb9507c020243c165c --- /dev/null +++ b/samples/texts/348597/page_201.md @@ -0,0 +1,21 @@ +# 7

## Vectors

### 7.1 Vectors in Two Dimensions (2-D)

A vector in 2-D is defined by its length and the angle it makes with a reference axis (usually the x-axis). This vector is represented graphically by an arrow. The tail of the arrow is called the initial point of the vector and the tip of the arrow is the terminal point. Two vectors are equal when both their length and angle with a reference axis are equal.

#### 7.1.1 Addition

The sum of two vectors $\vec{u} + \vec{v} = \vec{w}$ is a vector constructed graphically as follows. At the tip of the first vector, draw a vector equal to the second vector, such that its tail coincides with the tip of the first vector. The resultant vector has as its tail that of the first vector, and as its tip, the tip of the just-drawn second vector (the Parallelogram Rule) (see Figure 7.1).

The negative of a vector is that vector whose tip and tail have been exchanged from those of the vector. This leads to the conclusion that the difference of two vectors is the other diagonal in the parallelogram (Figure 7.2).

#### 7.1.2 Multiplication of a Vector by a Real Number

If we multiply a vector $\vec{v}$ by a real number $k$, the result is a vector whose length is $|k|$ times the length of $\vec{v}$, and whose direction is that of $\vec{v}$ if $k$ is positive, and opposite if $k$ is negative.

#### 7.1.3 Cartesian Representation

It is most convenient for a vector to be described by its projections on the x-axis and on the y-axis, respectively; these are denoted by $(v_1, v_2)$ or $(v_x, v_y)$. 
In this representation: \ No newline at end of file diff --git a/samples/texts/348597/page_202.md b/samples/texts/348597/page_202.md new file mode 100644 index 0000000000000000000000000000000000000000..80f46b524c21579d8107e2e3ebeda46cdb123214 --- /dev/null +++ b/samples/texts/348597/page_202.md @@ -0,0 +1,9 @@ +FIGURE 7.1
Sum of two vectors.

FIGURE 7.2
Difference of two vectors.

$$ \vec{u} = (u_1, u_2) = (u_1)\hat{e}_1 + (u_2)\hat{e}_2 \quad (7.1) $$

where $\hat{e}_1$ and $\hat{e}_2$ are the unit vectors (length is 1) parallel to the x-axis and y-axis, respectively. In terms of this representation, we can write the zero vector, the sum of two vectors, and the multiplication of a vector by a real number as follows: \ No newline at end of file diff --git a/samples/texts/348597/page_203.md b/samples/texts/348597/page_203.md new file mode 100644 index 0000000000000000000000000000000000000000..7b8d3c2c78d142a83a100477162e75cefd2f1f4f --- /dev/null +++ b/samples/texts/348597/page_203.md @@ -0,0 +1,46 @@ +$$
\vec{0} = (0, 0) = 0\hat{e}_1 + 0\hat{e}_2 \tag*{(7.2)}
$$

$$
\vec{u} + \vec{v} = \vec{w} = (u_1 + v_1, u_2 + v_2) = (u_1 + v_1)\hat{e}_1 + (u_2 + v_2)\hat{e}_2 \quad (7.3)
$$

$$
k\vec{u} = (ku_1, ku_2) = (ku_1)\hat{e}_1 + (ku_2)\hat{e}_2 \quad (7.4)
$$

Preparatory Exercise

Pb. 7.1 Using the above definitions and properties, prove the following identities:

$$
\begin{align*}
\vec{u} + \vec{v} &= \vec{v} + \vec{u} \\
(\vec{u} + \vec{v}) + \vec{w} &= \vec{u} + (\vec{v} + \vec{w}) \\
\vec{u} + \vec{0} &= \vec{0} + \vec{u} = \vec{u} \\
\vec{u} + (-\vec{u}) &= \vec{0} \\
k(l\vec{u}) &= (kl)\vec{u} \\
k(\vec{u} + \vec{v}) &= k\vec{u} + k\vec{v} \\
(k+l)\vec{u} &= k\vec{u} + l\vec{u}
\end{align*}
$$

The norm of a vector is the length of this vector. 
Using the Pythagorean theorem, its square is:

$$
\| \vec{u} \|^{2} = u_{1}^{2} + u_{2}^{2} \tag*{(7.5)}
$$

and therefore the unit vector in the $\vec{u}$ direction, denoted by $\hat{e}_u$, is given by:

$$
\hat{e}_u = \frac{1}{\sqrt{u_1^2 + u_2^2}} (u_1, u_2) \qquad (7.6)
$$

All of the above can be generalized to 3-D, or for that matter to n-dimensions.
For example:

$$
\hat{e}_u = \frac{1}{\sqrt{u_1^2 + u_2^2 + \dots + u_n^2}} (u_1, u_2, \dots, u_n) \quad (7.7)
$$ \ No newline at end of file diff --git a/samples/texts/348597/page_204.md b/samples/texts/348597/page_204.md new file mode 100644 index 0000000000000000000000000000000000000000..d59dfcb289c39753f775c40fa35adcd0342817ef --- /dev/null +++ b/samples/texts/348597/page_204.md @@ -0,0 +1,37 @@ +### 7.1.4 MATLAB Representation of the Above Results

MATLAB distinguishes between two kinds of vectors: the column vector and the row vector. As long as the components of the vectors are all real, the difference between the two is in the structure of the array. In the column vector case, the array representation is vertical and in the row vector case, the array representation is horizontal. This distinction is made for the purpose of including in a consistent structure the formulation of the dot product and the definition of matrix multiplication.

#### Example 7.1

Type and execute the following commands, while interpreting the output at each step:

```matlab
V=[1 3 5 7]
W=[1;3;5;7]
V'
U=3*V
Z=U+V
Y=V+W    %you cannot add a row vector and a column vector
```

You would have observed that:

1. The difference in the representation of the column and row vectors is in the manner they are separated inside the square brackets.

2. The single quotation mark following a vector with real components changes that vector from being a column vector to a row vector, and vice versa.

3. Multiplying a vector by a scalar simply multiplies each component of this vector by this scalar.

4. 
You can add two vectors of the same kind, and the components add in pairs.

5. You cannot add two vectors of different kinds; the computer will give you an error message alerting you that you are adding two quantities of different dimensions.

The MATLAB command for obtaining the norm of a vector is `norm`. Using this notation, it is a simple matter to define the unit vector in the same direction as a given vector.

#### Example 7.2

Find the length of the vector $u = [1 \ 5 \ 3 \ 2]$ and the unit vector parallel to it. \ No newline at end of file diff --git a/samples/texts/348597/page_205.md b/samples/texts/348597/page_205.md new file mode 100644 index 0000000000000000000000000000000000000000..e37952bd89dcb2d67cd549437cdd27e16ada627e --- /dev/null +++ b/samples/texts/348597/page_205.md @@ -0,0 +1,23 @@ +```matlab
u=[1 5 3 2]
lengthu=norm(u)          % length of vector u
unitu=u/(norm(u))        % unit vector parallel to u
lengthunitu=norm(unitu)  % verify length of unit vector
```

FIGURE 7.3
The geometry of the generalized Pythagorean theorem.

## 7.2 Dot (or Scalar) Product

If the angle between the vectors $\vec{u}$ and $\vec{v}$ is $\theta$, then the dot product of the two vectors is:

$$ \vec{u} \cdot \vec{v} = \|\vec{u}\| \|\vec{v}\| \cos(\theta) \qquad (7.8) $$

The dot product can also be expressed as a function of the vectors' components. Referring to Figure 7.3, we know from trigonometry the relation between the length of one side of a triangle, the lengths of the other two sides, and the cosine of the angle between those two sides. This relation is the generalized Pythagorean theorem. 
Referring to Figure 7.3, this gives:

$$ \|PQ\|^2 = \|\vec{u}\|^2 + \|\vec{v}\|^2 - 2\|\vec{u}\|\|\vec{v}\|\cos(\theta) \quad (7.9) $$

but since: \ No newline at end of file diff --git a/samples/texts/348597/page_206.md b/samples/texts/348597/page_206.md new file mode 100644 index 0000000000000000000000000000000000000000..57650c4bafab7072197f39776e7ea99178ff03ea --- /dev/null +++ b/samples/texts/348597/page_206.md @@ -0,0 +1,47 @@ +$$
\begin{align}
\vec{PQ} &= \vec{v} - \vec{u} \tag{7.10} \\
\Rightarrow \|\vec{u}\|\|\vec{v}\| \cos(\theta) &= \frac{1}{2} \left( \|\vec{u}\|^2 + \|\vec{v}\|^2 - \|\vec{v} - \vec{u}\|^2 \right) \tag{7.11}
\end{align}
$$

and the dot product can be written as:

$$
\vec{u} \cdot \vec{v} = \frac{1}{2}(u_1^2 + u_2^2 + v_1^2 + v_2^2 - (v_1 - u_1)^2 - (v_2 - u_2)^2) = u_1v_1 + u_2v_2 \quad (7.12)
$$

In an n-dimensional space, the above expression is generalized to:

$$
\vec{u} \cdot \vec{v} = u_1 v_1 + u_2 v_2 + \dots + u_n v_n \tag{7.13}
$$

and the norm square of the vector can be written as the dot product of the
vector with itself; that is,

$$
\|\vec{u}\|^2 = \vec{u} \cdot \vec{u} = u_1^2 + u_2^2 + \dots + u_n^2 \quad (7.14)
$$

**Example 7.3**

Parallelism and orthogonality of two vectors in a plane. Let the vectors $\vec{u}$ and $\vec{v}$ be given by: $\vec{u} = 3\hat{e}_1 + 4\hat{e}_2$ and $\vec{v} = a\hat{e}_1 + 7\hat{e}_2$. What is the value of $a$ if the vectors are parallel, and if the vectors are orthogonal?

Solution:

*Case 1:* If the vectors are parallel, they make the same angle with the x-axis, and so the ratio of the x-component to the y-component is the same for both vectors. This means that:

$$
\frac{a}{7} = \frac{3}{4} \Rightarrow a = 21/4
$$

*Case 2:* If the vectors are orthogonal, this means that the angle between them is 90°, and their dot product will be zero because the cosine for that angle is zero. 
This implies that: + +$$ +3a + 28 = 0 \Rightarrow a = -28/3 +$$ + +**Example 7.4** + +Find the unit vector in 2-D that is perpendicular to the line $ax + by + c = 0$. \ No newline at end of file diff --git a/samples/texts/348597/page_207.md b/samples/texts/348597/page_207.md new file mode 100644 index 0000000000000000000000000000000000000000..2dd83ae69e7c58a52549705cf39fb905b7095fac --- /dev/null +++ b/samples/texts/348597/page_207.md @@ -0,0 +1,33 @@ +**Solution:** Choose two arbitrary points on this line. Denote their coordinates by $(x_1, y_1)$ and $(x_2, y_2)$; being on the line, they satisfy the equation of the line: + +$$ax_1 + by_1 + c = 0$$ + +$$ax_2 + by_2 + c = 0$$ + +Subtracting the first equation from the second equation, we obtain: + +$$a(x_2 - x_1) + b(y_2 - y_1) = 0$$ + +which means that $(a,b) \perp (x_2 - x_1, y_2 - y_1)$, and the unit vector perpendicular to the line is: + +$$\hat{e}_{\perp} = \left( \frac{a}{\sqrt{a^2 + b^2}}, \frac{b}{\sqrt{a^2 + b^2}} \right)$$ + +### Example 7.5 + +Find the angle that the lines $3x + 2y + 2 = 0$ and $2x - y + 1 = 0$ make together. + +**Solution:** The angle between two lines is equal to the angle between their normal unit vectors. The unit vectors normal to each of the lines are, respectively: + +$$\hat{n}_1 = \left( \frac{3}{\sqrt{13}}, \frac{2}{\sqrt{13}} \right) \text{ and } \hat{n}_2 = \left( \frac{2}{\sqrt{5}}, \frac{-1}{\sqrt{5}} \right)$$ + +Having the two orthogonal unit vectors, it is a simple matter to compute the angle between them: + +$$\cos(\theta) = \hat{n}_1 \cdot \hat{n}_2 = \frac{4}{\sqrt{65}} \Rightarrow \theta = 1.0517 \text{ radians}$$ + +### 7.2.1 MATLAB Representation of the Dot Product + +The dot product is written as the product of a row vector by a column vector of the same length. 
+ +#### Example 7.6 + +Find the dot product of the vectors: \ No newline at end of file diff --git a/samples/texts/348597/page_208.md b/samples/texts/348597/page_208.md new file mode 100644 index 0000000000000000000000000000000000000000..f9391d30386ba8b4e6d8adf09bd8e1e092070499 --- /dev/null +++ b/samples/texts/348597/page_208.md @@ -0,0 +1,32 @@ +$$u = [1 \ 5 \ 3 \ 7] \text{ and } v = [2 \ 4 \ 6 \ 8]$$

**Solution:** Type and execute each of the following commands, while interpreting each output:

```matlab
u=[1 5 3 7]
v=[2 4 6 8]
u*v'
v*u'
u*v     %you cannot multiply two rows
u'*v
u*u'
(norm(u))^2
```

As observed from the above results, in MATLAB, the dot product can be obtained only by the multiplication of a row on the left and a column of the same length on the right. If the order of a row and column are exchanged, we obtain a two-dimensional array structure (i.e., a matrix, the subject of Chapter 8). On the other hand, if we multiply two rows, MATLAB gives an error message about the non-matching of dimensions.

Observe further, as pointed out previously, the relation between the length of a vector and its dot product with itself.

## In-Class Exercises

**Pb. 7.2** Generalize the analytical technique, as previously used in Example 7.4 for finding the normal to a line in 2-D, to find the unit vector in 3-D that is perpendicular to the plane:

$$ax + by + cz + d = 0$$

*(Hint: A vector is perpendicular to a plane if it is perpendicular to two non-collinear vectors in that plane.)*

**Pb. 7.3** Find, in 2-D, the distance of the point $P(x_0, y_0)$ from the line $ax + by + c = 0$. *(Hint: Remember the geometric definition of the dot product.)*

**Pb. 
7.4** Prove the following identities:

$$\vec{u} \cdot \vec{v} = \vec{v} \cdot \vec{u}, \quad \vec{u} \cdot (\vec{v} + \vec{w}) = \vec{u} \cdot \vec{v} + \vec{u} \cdot \vec{w}, \quad k(\vec{u} \cdot \vec{v}) = (k\vec{u}) \cdot \vec{v}$$ \ No newline at end of file diff --git a/samples/texts/348597/page_209.md b/samples/texts/348597/page_209.md new file mode 100644 index 0000000000000000000000000000000000000000..a6b86dceebeea89708788f45ac233182e4e32e63 --- /dev/null +++ b/samples/texts/348597/page_209.md @@ -0,0 +1,29 @@ +## 7.3 Components, Direction Cosines, and Projections

### 7.3.1 Components

The *components* of a vector are the values of each element in the defining *n*-tuple representation. For example, consider the vector $\vec{u} = [1 \ 5 \ 3 \ 7]$ in real 4-D. We say that its first, second, third, and fourth components are 1, 5, 3, and 7, respectively. (We are maintaining, in this section, the arrow notation for the vectors, irrespective of the dimension of the space.)

The simplest basis of an *n*-dimensional vector space is a collection of *n* unit vectors, each having a single non-zero component, the location of which is different for each of these basis vectors. This basis is not unique.

For example, in 4-D space, the canonical four-unit orthonormal basis vectors are given, respectively, by:

$$ \hat{e}_1 = [1 \ 0 \ 0 \ 0] \qquad (7.15) $$

$$ \hat{e}_2 = [0 \ 1 \ 0 \ 0] \qquad (7.16) $$

$$ \hat{e}_3 = [0 \ 0 \ 1 \ 0] \qquad (7.17) $$

$$ \hat{e}_4 = [0 \ 0 \ 0 \ 1] \qquad (7.18) $$

and the vector $\vec{u}$ can be written as a linear combination of the basis vectors:

$$ \vec{u} = u_1\hat{e}_1 + u_2\hat{e}_2 + u_3\hat{e}_3 + u_4\hat{e}_4 \qquad (7.19) $$

The basis vectors are chosen to be orthonormal, which means that in addition to requiring each one of them to have unit length, they are also orthogonal two by two to each other. 
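These properties can be checked directly in MATLAB; the sketch below (the test vector is an arbitrary choice) builds the canonical basis of Eqs. (7.15) through (7.18) with `eye` and verifies orthonormality:

```matlab
e=eye(4);          % rows are the canonical basis vectors
u=[1 5 3 7];
e(1,:)*e(2,:)'     % distinct basis vectors are orthogonal: 0
e(3,:)*e(3,:)'     % each basis vector has unit norm: 1
u*e(2,:)'          % picks out the second component of u: 5
```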
These properties of the basis vectors leads us to the following important result: the *m*th component of a vector is obtained by taking the dot product of the vector with the corresponding unit vector, that is, + +$$ u_m = \hat{e}_m \cdot \vec{u} \qquad (7.20) $$ + +### 7.3.2 Direction Cosines + +The *direction cosines* are defined by: \ No newline at end of file diff --git a/samples/texts/348597/page_21.md b/samples/texts/348597/page_21.md new file mode 100644 index 0000000000000000000000000000000000000000..a24cd9bf962f3c641ac374d37ff3d477685fed60 --- /dev/null +++ b/samples/texts/348597/page_21.md @@ -0,0 +1,27 @@ +errors will be time-consuming to fix because if you are working in the command window, you need to retype all or part of the program. Even if you do not make any mistakes (!), all of your work may be lost if you inadvertently quit MATLAB and have not taken the necessary steps to save the contents of the important program that you just finished developing. To preserve large sets of commands, you can store them in a special type of file called an *M-file*. + +MATLAB supports two types of *M-files*: *script* and *function M-files*. To hold a large collection of commands, we use a *script M-file*. The *function M-file* is discussed in Chapter 3. To make a *script M-file*, you need to open a file using the built-in MATLAB editor. For both Macs and PCs, first select New from the file menu. Then select the *M-file* entry from the pull-down menu. After typing the *M-file* contents, you need to save the file: + +For Macs and PCs, select the **save as** command from the file window. A field will pop up in which you can type in the name you have chosen for this file (make sure that you do not name a file by a mathematical abbreviation, the name of a mathematical function, or a number). Also make sure that the file name has a `.m` extension added at the end of its name. + +For Macs, save the file in a user's designated volume. 
+ +For PCs, save the file in the default (bin) subdirectory.

To run your *script M-file*, just type the filename (omitting the `.m` extension at its end) at the MATLAB prompt.

**Example 1.7**

For practice, go to your file edit window to create the following file that you name `myfile.m`.

```matlab
clear, clf
x1=1;y1=.5;x2=2;y2=1.5;x3=3;y3=2;
plot(x1,y1,'o',x2,y2,'+',x3,y3,'*')
axis([0 4 0 4])
xlabel('xaxis')
ylabel('yaxis')
title('3 points in a plane')
```

After creating and saving `myfile.m`, go to the MATLAB command window and enter `myfile`. MATLAB will execute the instructions in the order of the statements stored in your `myfile.m` file. \ No newline at end of file diff --git a/samples/texts/348597/page_210.md b/samples/texts/348597/page_210.md new file mode 100644 index 0000000000000000000000000000000000000000..7d3374ee5fe9188652a223301ab9c289ed1994d5 --- /dev/null +++ b/samples/texts/348597/page_210.md @@ -0,0 +1,43 @@ +$$
\cos(\gamma_m) = \frac{u_m}{\|\vec{u}\|} = \frac{\hat{e}_m \cdot \vec{u}}{\|\vec{u}\|} \qquad (7.21)
$$

In 2-D or 3-D, these quantities have the geometrical interpretation of being the cosine of the angles that the vector $\vec{u}$ makes with the $x$, $y$, and $z$ axes.

### 7.3.3 Projections

The projection of a vector $\vec{u}$ over a vector $\vec{a}$ is a vector whose magnitude is the dot product of the vector $\vec{u}$ with the unit vector in the direction of $\vec{a}$, denoted by $\hat{e}_a$, and whose orientation is in the direction of $\hat{e}_a$:

$$
\operatorname{proj}_{\vec{a}}(\vec{u}) = (\vec{u} \cdot \hat{e}_a) \hat{e}_a = \frac{\vec{u} \cdot \vec{a}}{\|\vec{a}\|^2} \vec{a} \quad (7.22)
$$

The component of $\vec{u}$ that is perpendicular to $\vec{a}$ is obtained by subtracting from $\vec{u}$ the projection vector of $\vec{u}$ over $\vec{a}$. 
+ +MATLAB Example

Assume that we have the vector $\vec{u} = \hat{e}_1 + 5\hat{e}_2 + 3\hat{e}_3 + 7\hat{e}_4$ and the vector $\vec{a} = 2\hat{e}_1 + 3\hat{e}_2 + \hat{e}_3 + 4\hat{e}_4$. We desire to obtain the components of each vector, the projection of $\vec{u}$ over $\vec{a}$, and the component of $\vec{u}$ orthogonal to $\vec{a}$.

Type, execute, and interpret at each step, each of the following commands using the above definitions:

```matlab
u=[1 5 3 7]
a=[2 3 1 4]
u(1)
a(2)
prjuovera=((u*a')/(norm(a)^2))*a
orthoutoa=u-prjuovera
prjuovera*orthoutoa'
```

The last command should give you an answer that is zero, up to machine round-off errors, because the projection of $\vec{u}$ over $\vec{a}$ and the component of $\vec{u}$ orthogonal to $\vec{a}$ are perpendicular.

## 7.4 The Dirac Notation and Some General Theorems*

Thus far, we have established some key practical results in real finite-dimensional vector spaces; namely: \ No newline at end of file diff --git a/samples/texts/348597/page_211.md b/samples/texts/348597/page_211.md new file mode 100644 index 0000000000000000000000000000000000000000..4213a7069ff8de706e36106c59ee2af8342c7248 --- /dev/null +++ b/samples/texts/348597/page_211.md @@ -0,0 +1,34 @@ +1. A vector can be decomposed into a linear combination of the basis vectors.

2. The dot product of two vectors can be written as the multiplication of a row vector by a column vector, each of whose elements are the components of the respective vectors.

3. The norm of a vector, a non-negative quantity, is the square root of the dot product of the vector with itself.

4. The unit vector parallel to a specific vector is that vector divided by its norm.

5. The projection of a vector on another can be deduced from the dot product of the two vectors. 
+ +To facilitate the statement of these results in a notation that will be suitable for infinite-dimensional vector spaces (which is very briefly introduced in Section 7.7), Dirac in his elegant formulation of quantum mechanics introduced a simple notation that we now present.

The Dirac notation represents the row vector by what he called the "bra-vector" and the column vector by what he called the "ket-vector," such that when a dot product is obtained by joining the two vectors, the result will be the scalar "bra-ket" quantity. Specifically:

$$ \text{Column vector } \vec{u} \Rightarrow |u\rangle \qquad (7.23) $$

$$ \text{Row vector } \vec{v} \Rightarrow \langle v| \qquad (7.24) $$

$$ \text{Dot product } \vec{v} \cdot \vec{u} \Rightarrow \langle v|u\rangle \qquad (7.25) $$

The orthonormality of the basis vectors is written as:

$$ \langle m | n \rangle = \delta_{m,n} \qquad (7.26) $$

where the basis vectors are referred to by their indices, and where $\delta_{m,n}$ is the Kronecker delta, equal to 1 when its indices are equal, and zero otherwise.
The *norm* of a vector, a non-negative quantity, is given by:

$$ (\text{norm of } |u\rangle)^2 = \|u\|^2 = \langle u|u \rangle \qquad (7.27) $$

The *Decomposition rule* is written as:

$$ |u\rangle = \sum_n c_n |n\rangle \qquad (7.28) $$

where the components are obtained by multiplying Eq. (7.28) on the left by $\langle m|$. Using Eq. 
(7.26), we deduce: \ No newline at end of file diff --git a/samples/texts/348597/page_212.md b/samples/texts/348597/page_212.md new file mode 100644 index 0000000000000000000000000000000000000000..424ea0d7906cab3346281d19f8d025dbb09c6ecf --- /dev/null +++ b/samples/texts/348597/page_212.md @@ -0,0 +1,58 @@ +$$
\langle m | u \rangle = \sum_n c_n \langle m | n \rangle = \sum_n c_n \delta_{m,n} = c_m \quad (7.29)
$$

Next, using the Dirac notation, we present the proofs of two key theorems of vector algebra: the Cauchy-Schwartz inequality and the triangle inequality.

### 7.4.1 Cauchy-Schwartz Inequality

Let $|u\rangle$ and $|v\rangle$ be any non-zero vectors; then:

$$
|\langle u | v \rangle|^2 \leq \langle u | u \rangle \langle v | v \rangle \tag{7.30}
$$

PROOF Let $\epsilon = \pm 1$, ($\epsilon^2 = 1$); then

$$
|\langle u | v \rangle| = \epsilon \langle u | v \rangle \quad \text{such that} \quad \begin{cases} \epsilon = 1 & \text{if } \langle u | v \rangle \ge 0 \\ \epsilon = -1 & \text{if } \langle u | v \rangle \le 0 \end{cases} \tag{7.31}
$$

Now, consider the ket $|\epsilon u + tv\rangle$; its norm is always non-negative. Computing this norm square, we obtain:

$$
\begin{align*}
\langle \epsilon u + tv | \epsilon u + tv \rangle &= \epsilon^2 \langle u | u \rangle + \epsilon t \langle u | v \rangle + t \epsilon \langle v | u \rangle + t^2 \langle v | v \rangle \\
&= \langle u | u \rangle + 2\epsilon t \langle u | v \rangle + t^2 \langle v | v \rangle \tag{7.32} \\
&= \langle u | u \rangle + 2t |\langle u | v \rangle| + t^2 \langle v | v \rangle
\end{align*}
$$

The RHS of this quantity is a non-negative quadratic polynomial in $t$, and can be written in the standard form:

$$
at^2 + bt + c \geq 0
\quad
(7.33)
$$

The non-negativity of this quadratic polynomial means that it can have at most one real root. 
This means that the discriminant must satisfy the inequality:

$$
b^2 - 4ac \le 0
\quad (7.34)
$$

Replacing *a*, *b*, *c* by their values from Eq. (7.32), we obtain:

$$
4 |\langle u | v \rangle|^2 - 4 \langle u | u \rangle \langle v | v \rangle \le 0
\quad (7.35)
$$

$$
\Rightarrow |\langle u | v \rangle|^2 \leq \langle u | u \rangle \langle v | v \rangle
\quad (7.36)
$$ \ No newline at end of file diff --git a/samples/texts/348597/page_213.md b/samples/texts/348597/page_213.md new file mode 100644 index 0000000000000000000000000000000000000000..cba91aa8559c98ceb8556c332e5ed8d4bba08ab2 --- /dev/null +++ b/samples/texts/348597/page_213.md @@ -0,0 +1,48 @@ +which is the desired result. Note that the equality holds if and only if the two vectors are linearly dependent (i.e., one vector is equal to a scalar multiplied by the other vector).

**Example 7.7**

Show that for any three positive numbers, *u*₁, *u*₂, and *u*₃, the following inequality always holds:

$$
9 \le (u_1 + u_2 + u_3) \left( \frac{1}{u_1} + \frac{1}{u_2} + \frac{1}{u_3} \right) \qquad (7.37)
$$

PROOF Choose the vectors $|v\rangle$ and $|w\rangle$ such that:

$$
|v\rangle = |u_1^{1/2}, u_2^{1/2}, u_3^{1/2}\rangle \tag{7.38}
$$

$$
|w\rangle = \left| \left(\frac{1}{u_1}\right)^{1/2}, \left(\frac{1}{u_2}\right)^{1/2}, \left(\frac{1}{u_3}\right)^{1/2} \right\rangle \quad (7.39)
$$

then:

$$
\langle v | w \rangle = 3 \tag{7.40}
$$

$$
\langle v | v \rangle = (u_1 + u_2 + u_3) \tag{7.41}
$$

$$
\langle w | w \rangle = \left( \frac{1}{u_1} + \frac{1}{u_2} + \frac{1}{u_3} \right) \tag{7.42}
$$

Applying the Cauchy-Schwartz inequality in Eq. (7.36) establishes the desired result. The above inequality can be trivially generalized to n-elements, which leads to the following important result for the equivalent resistance for resistors all in series or all in parallel. 
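Before moving on, a quick numerical sanity check of Eq. (7.37), with an arbitrary choice of positive values:

```matlab
u=[2 5 11];              % arbitrary positive numbers
lhs=9
rhs=sum(u)*sum(1./u)     % evaluates to about 14.24, indeed larger than 9
```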
+ +**Application** + +The equivalent resistance of *n*-resistors all in series and the equivalent resistance of the same *n*-resistors all in parallel obey the relation: + +$$ +n^2 \le \frac{R_{\text{series}}}{R_{\text{parallel}}} \qquad (7.43) +$$ \ No newline at end of file diff --git a/samples/texts/348597/page_214.md b/samples/texts/348597/page_214.md new file mode 100644 index 0000000000000000000000000000000000000000..e869cecea181d901b7e1e61f6f17a4c77ca65750 --- /dev/null +++ b/samples/texts/348597/page_214.md @@ -0,0 +1,34 @@ +PROOF The proof is straightforward. Using Eq. (7.37) and recalling Ohm's law for n resistors {$R_1, R_2, ..., R_n$}, the equivalent resistances for this combination, when all resistors are in series or are all in parallel, are given respectively by: + +$$R_{series} = R_1 + R_2 + ... + R_n \quad (7.44)$$ + +and + +$$\frac{1}{R_{parallel}} = \frac{1}{R_1} + \frac{1}{R_2} + ... + \frac{1}{R_n} \quad (7.45)$$ + +**Question:** Can you derive a similar theorem for capacitors all in series and all in parallel? (Remember that the equivalent capacitance law is different for capacitors than for resistors.) + +### 7.4.2 Triangle Inequality + +This is, as the name implies, a generalization of a theorem from Euclidean geometry in 2-D that states that the length of one side of a triangle is smaller or equal to the sum of the other two sides. 
Its generalization is

$$\|u + v\| \le \|u\| + \|v\| \quad (7.46)$$

PROOF Using the relation between the norm and the dot product, we have:

$$
\begin{align}
\|u+v\|^2 &= \langle u+v | u+v \rangle = \langle u|u \rangle + 2\langle u|v \rangle + \langle v|v \rangle \tag{7.47} \\
&= \|u\|^2 + 2\langle u|v \rangle + \|v\|^2 \le \|u\|^2 + 2|\langle u|v \rangle| + \|v\|^2
\end{align}
$$

Using the Cauchy-Schwartz inequality for the dot product appearing in the previous inequality, we deduce that:

$$\|u+v\|^2 \leq \|u\|^2 + 2\|u\|\|v\| + \|v\|^2 = (\|u\| + \|v\|)^2 \quad (7.48)$$

which establishes the theorem.

## Homework Problems

**Pb. 7.5** Using the Dirac notation, generalize to n-dimensions the 2-D geometry Parallelogram theorem, which states that: *The sum of the squares of the diagonals of a parallelogram is equal to twice the sum of the squares of the sides; or that:* \ No newline at end of file diff --git a/samples/texts/348597/page_215.md b/samples/texts/348597/page_215.md new file mode 100644 index 0000000000000000000000000000000000000000..7a27670d8fd54d0a12d7aac8f41b634be994605d --- /dev/null +++ b/samples/texts/348597/page_215.md @@ -0,0 +1,49 @@ +$$
\| \vec{u} + \vec{v} \|^2 + \| \vec{u} - \vec{v} \|^2 = 2 \| \vec{u} \|^2 + 2 \| \vec{v} \|^2
$$

**Pb. 7.6** Referring to the inequality of Eq. (7.43), which relates the equivalent resistances of *n*-resistors in series and in parallel, under what conditions does the equality hold?

## 7.5 Cross Product and Scalar Triple Product*

In this section and in Sections 7.6 and 7.7, we restrict our discussions to vectors in a 3-D space, and use the more familiar conventional vector notation. 
+ +### 7.5.1 Cross Product

*DEFINITION* If two vectors are given by $\vec{u} = (u_1, u_2, u_3)$ and $\vec{v} = (v_1, v_2, v_3)$ then their cross product, denoted by $\vec{u} \times \vec{v}$, is a vector given by:

$$
\vec{u} \times \vec{v} = (u_2 v_3 - u_3 v_2, u_3 v_1 - u_1 v_3, u_1 v_2 - u_2 v_1) \quad (7.49)
$$

By simple substitution, we can infer the following properties for the cross product as summarized in the preparatory exercises below.

Preparatory Exercises

**Pb. 7.7** Show, using the above definition for the cross product, that:

$$
\textbf{a.} \quad \vec{u} \cdot (\vec{u} \times \vec{v}) = \vec{v} \cdot (\vec{u} \times \vec{v}) = 0 \Rightarrow \vec{u} \times \vec{v} \text{ is orthogonal to both } \vec{u} \text{ and } \vec{v}
$$

$$
\textbf{b.} \quad \|\vec{u} \times \vec{v}\|^2 = \|\vec{u}\|^2 \|\vec{v}\|^2 - (\vec{u} \cdot \vec{v})^2 \quad \text{Called the Lagrange Identity}
$$

$$
c. \quad \vec{u} \times \vec{v} = -(\vec{v} \times \vec{u}) \text{ Noncommutativity}
$$

d. $\vec{u} \times (\vec{v} + \vec{w}) = \vec{u} \times \vec{v} + \vec{u} \times \vec{w}$ Distributive property

e. $k(\vec{u} \times \vec{v}) = (k\vec{u}) \times \vec{v} = \vec{u} \times (k\vec{v})$

$$
f. \quad \vec{u} \times \vec{0} = \vec{0}
$$

g. $\vec{u} \times \vec{u} = \vec{0}$

**Pb. 7.8** Verify the following relations for the basis unit vectors: \ No newline at end of file diff --git a/samples/texts/348597/page_216.md b/samples/texts/348597/page_216.md new file mode 100644 index 0000000000000000000000000000000000000000..52460b1e2c9f587cd74bf3b89f0ade53fac0a3ba --- /dev/null +++ b/samples/texts/348597/page_216.md @@ -0,0 +1,31 @@ +$$ \hat{e}_1 \times \hat{e}_2 = \hat{e}_3; \quad \hat{e}_2 \times \hat{e}_3 = \hat{e}_1; \quad \hat{e}_3 \times \hat{e}_1 = \hat{e}_2 $$

**Pb. 
7.9** Ask your instructor to show you how the Right Hand rule is used to determine the direction of a vector equal to the cross product of two other vectors.

### 7.5.2 Geometric Interpretation of the Cross Product

As noted in Pb. 7.7a, the cross product is a vector that is perpendicular to its two constituents. This determines the resultant vector's direction. To determine its magnitude, consider the Lagrange Identity. If the angle between $\vec{u}$ and $\vec{v}$ is $\theta$, then:

$$ \|\vec{u} \times \vec{v}\|^2 = \|\vec{u}\|^2 \|\vec{v}\|^2 - \|\vec{u}\|^2 \|\vec{v}\|^2 \cos^2(\theta) \qquad (7.50) $$

and

$$ \|\vec{u} \times \vec{v}\| = \|\vec{u}\| \|\vec{v}\| \sin(\theta) \qquad (7.51) $$

that is, the magnitude of the cross product of two vectors is the area of the parallelogram formed by these vectors.

### 7.5.3 Scalar Triple Product

**DEFINITION** If $\vec{u}, \vec{v}$, and $\vec{w}$ are vectors in 3-D, then $\vec{u} \cdot (\vec{v} \times \vec{w})$ is called the scalar triple product of $\vec{u}, \vec{v}$, and $\vec{w}$.

**PROPERTY**

$$ \vec{u} \cdot (\vec{v} \times \vec{w}) = \vec{v} \cdot (\vec{w} \times \vec{u}) = \vec{w} \cdot (\vec{u} \times \vec{v}) \qquad (7.52) $$

This property can be trivially proven by writing out the component expansions of the three quantities.

#### 7.5.3.1 Geometric Interpretation of the Scalar Triple Product

If the initial points of the vectors $\vec{u}, \vec{v}$, and $\vec{w}$ are brought to the same origin, these three vectors define a parallelepiped. The absolute value of the scalar triple product can then be interpreted as the volume of this parallelepiped. 
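MATLAB's built-in `cross` and `dot` functions make these interpretations easy to check numerically; the vectors below are arbitrary illustrative choices:

```matlab
u=[2 -1 1]; v=[1 3 0]; w=[0 1 4];
cross(v,w)               % [12 -4 1], perpendicular to both v and w
norm(cross(u,v))         % area of the parallelogram formed by u and v
abs(dot(u,cross(v,w)))   % volume of the parallelepiped: 29
```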
We have shown earlier that $\vec{v} \times \vec{w}$ is a vector that is perpendicular to both $\vec{v}$ and $\vec{w}$, + +© 2001 by CRC Press LLC \ No newline at end of file diff --git a/samples/texts/348597/page_218.md b/samples/texts/348597/page_218.md new file mode 100644 index 0000000000000000000000000000000000000000..efde9b52d2c162c0b35b371a6ac82d72a391c42e --- /dev/null +++ b/samples/texts/348597/page_218.md @@ -0,0 +1,37 @@ +**Pb. 7.11** Find two unit vectors that are orthogonal to both vectors given by: + +$$ \vec{a} = (2,-1,2) \text{ and } \vec{b} = (1,2,-3) $$ + +**Pb. 7.12** Find the area of the triangle with vertices at the points: + +$A(0,-1,1)$, $B(3,1,0)$, and $C(-2,0,2)$ + +**Pb. 7.13** Find the volume of the parallelepiped formed by the three vectors: + +$$ \vec{u} = (1,2,0), \quad \vec{v} = (0,3,0), \quad \vec{w} = (1,2,3) $$ + +**Pb. 7.14** Determine the equation of a plane that passes through the point $(1, 1, 1)$ and is normal to the vector $(2, 1, 2)$. + +**Pb. 7.15** Find the angle of intersection of the planes: + +$$ x + y - z = 0 \text{ and } x - 3y + z - 1 = 0 $$ + +**Pb. 7.16** Find the distance between the point $(3, 1, -2)$ and the plane $z = 2x - 3y$. + +**Pb. 7.17** Find the equation of the line that contains the point $(3, 2, 1)$ and is perpendicular to the plane $x + 2y - 2z = 2$. Write the parametric equation for this line. + +**Pb. 7.18** Find the point of intersection of the plane $2x - 3y + z = 6$ and the line + +$$ \frac{x-1}{3} = \frac{y+1}{1} = \frac{z-2}{2} $$ + +**Pb. 7.19** Show that the points $(1, 5)$, $(3, 11)$, and $(5, 17)$ are collinear. + +**Pb. 7.20** Show that the three vectors $\vec{u}, \vec{v}$, and $\vec{w}$ are coplanar: + +$$ \vec{u} = (2,3,5); \quad \vec{v} = (2,8,1); \quad \vec{w} = (8,22,12) $$ + +**Pb. 7.21** Find the unit vector normal to the plane determined by the points $(0, 0, 1)$, $(0, 1, 0)$, and $(1, 0, 0)$. + +## *Homework Problem* + +**Pb. 
7.22** Determine the tetrahedron with the largest surface area whose vertices $P_0$, $P_1$, $P_2$, and $P_3$ are on the unit sphere $x^2 + y^2 + z^2 = 1$. \ No newline at end of file diff --git a/samples/texts/348597/page_219.md b/samples/texts/348597/page_219.md new file mode 100644 index 0000000000000000000000000000000000000000..f167518865e1ea03286cb8358350b0f34b5d0544 --- /dev/null +++ b/samples/texts/348597/page_219.md @@ -0,0 +1,23 @@ +(Hints: (1) Designate the point $P_0$ as north pole and confine $P_1$ to the zero meridian. With this choice, the coordinates of the vertices are given by: + +$$P_0 = (\theta_0 = \pi / 2, \phi_0 = 0)$$ + +$$P_1 = (\theta_1, \phi_1 = 0)$$ + +$$P_2 = (\theta_2, \phi_2)$$ + +$$P_3 = (\theta_3, \phi_3)$$ + +(2) From symmetry, the optimal tetrahedron will have a base ($P_1, P_2, P_3$) that is an equilateral triangle in a plane parallel to the equatorial plane. The latitude of ($P_1, P_2, P_3$) is $\theta$, while their longitudes are $(0, 2\pi/3, -2\pi/3)$, respectively. (3) The area of the tetrahedron is the sum of the areas of the four triangles (012), (023), (031), (123), where we are indicating each point by its subscript. (4) Express the area as function of $\theta$. Find the value of $\theta$ that maximizes this quantity.) + +## 7.6 Vector Valued Functions + +As you may recall, in Chapter 1 we described curves in 2-D and 3-D by parametric equations. Essentially, we gave each of the coordinates as a function of a parameter. In effect, we generated a vector valued function because the position of the point describing the curve can be written as: + +$$\vec{R}(t) = x(t)\hat{e}_1 + y(t)\hat{e}_2 + z(t)\hat{e}_3 \quad (7.53)$$ + +If the parameter $t$ was chosen to be time, then the tip of the vector $\vec{R}(t)$ would be the position of a point on that curve as a function of time. In mechanics, finding $\vec{R}(t)$ is ultimately the goal of any problem in the dynamics of a point particle. 
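Equation (7.53) is easy to experiment with numerically. The sketch below (written in Python purely for illustration; the helix curve, the step size, and the function names are our own choices, not from the text) samples the position vector of a circular helix and estimates the velocity vector by a centered finite difference:

```python
import math

def R(t):
    # Position vector of a circular helix: x = cos t, y = sin t, z = t
    return (math.cos(t), math.sin(t), t)

def velocity(t, h=1e-6):
    # Centered finite-difference estimate of dR/dt
    rp, rm = R(t + h), R(t - h)
    return tuple((p - m) / (2 * h) for p, m in zip(rp, rm))

def norm(v):
    return math.sqrt(sum(c * c for c in v))

v = velocity(0.0)
print(v, norm(v))
```

For this helix the exact velocity at $t = 0$ is $(-\sin 0, \cos 0, 1) = (0, 1, 1)$, of magnitude $\sqrt{2}$, which the finite difference reproduces to several digits.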
In many electrical engineering problems, such as the design of tubes and other microwave devices, we need to determine the position of electrons whose motion is controlled by a variety of electric and magnetic field geometries. The quantities introduced below are the kinematic variables of the problem; the dynamics form the subject of mechanics courses.
+
+To help visualize the shape of a curve generated by the tip of the position vector $\vec{R}(t)$, we introduce the tangent vector and the normal vector to the curve, and the curvature of the curve.
+
+The velocity vector field associated with the above position vector is defined through:
\ No newline at end of file
diff --git a/samples/texts/348597/page_22.md b/samples/texts/348597/page_22.md
new file mode 100644
index 0000000000000000000000000000000000000000..cf10b8ecf9949c6c82168915a7a964bd9397bca2
--- /dev/null
+++ b/samples/texts/348597/page_22.md
@@ -0,0 +1,33 @@
+# 1.5 MATLAB Simple Programming
+
+## 1.5.1 Iterative Loops
+
+The power of computers lies in their ability to perform a large number of repetitive calculations. To do this without entering the value of a parameter or variable each time that these are changed, all computer languages have control structures that allow commands to be performed and controlled by counter variables, and MATLAB is no different. For example, the MATLAB "for" loop allows a statement or a group of statements to be repeated.
+
+**Example 1.8**
+
+Generate the square of the first ten integers.
+
+*Solution:* Edit and execute the following script M-file:
+
+```matlab
+for m=1:10
+  x(m)=m^2;
+end;
+```
+
+In this case, the number of repetitions is controlled by the index variable *m*, which takes on the values *m* = 1 through *m* = 10 in intervals of 1. Therefore, ten assignments were made. What the above loop is doing is sequentially assigning the value of *m*^2 to each element of the "x-array." An array is just a data structure that can hold multiple entries. 
An array can be 1-D, such as in a vector, or 2-D, such as in a matrix. More will be said about vectors and matrices in subsequent chapters. At this time, think of the 1-D and 2-D arrays as pigeonholes with numbers or ordered pairs of numbers, respectively, assigned to them.
+
+To find the value of a particular slot of the array, such as slot 3, enter:
+
+`x(3)`
+
+To read all the values stored in the array, type:
+
+`x`
+
+*Question:* What do you get if you enter *m*?
+
+## 1.5.2 If-Else-End Structures
+
+If a sequence of commands must be conditionally evaluated based on a relational test, the programming of this logical relationship is executed with some variation of an `if-else-end` structure.
\ No newline at end of file
diff --git a/samples/texts/348597/page_222.md b/samples/texts/348597/page_222.md
new file mode 100644
index 0000000000000000000000000000000000000000..0b597f7e8063d0596227f13b148555aef70d545a
--- /dev/null
+++ b/samples/texts/348597/page_222.md
@@ -0,0 +1,29 @@
+**Pb. 7.24** Using the parametric equations for an ellipse given in Example 1.13, find the curvature of the ellipse as a function of $t$.
+
+a. At what points is the curvature a minimum, and at what points is it a maximum?
+
+b. What does the velocity do at the points of minimum and maximum curvature?
+
+c. On what dates of the year does the planet Earth pass through these points on its trajectory around the sun?
+
+## 7.7 Line Integral
+
+As you may have already learned in an elementary physics course: if a force $\vec{F}$ is applied to a particle that moves by an infinitesimal distance $\Delta\vec{l}$, then the infinitesimal work done by the force on the particle is the scalar product of the force by the displacement; that is,
+
+$$ \Delta W = \vec{F} \cdot \Delta\vec{l} \qquad (7.58) $$
+
+Now, to calculate the work done when the particle moves along a curve C, located in a plane, we need to define the concept of a line integral. 
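The definition in Eq. (7.58) already suggests a numerical recipe: discretize the curve, evaluate $\vec{F} \cdot \Delta\vec{l}$ on each segment, and sum. A sketch (Python used purely for illustration; the force field, the path, and the function names are invented examples, not from the text):

```python
import math

def F(x, y):
    # Example force field (our own choice): F = (-y, x)
    return (-y, x)

def work_along_circle(n=100000):
    # Approximate W = sum of F . delta_l along the quarter unit circle,
    # parametrized by x = cos(theta), y = sin(theta), theta in [0, pi/2]
    W = 0.0
    for k in range(n):
        t0 = (math.pi / 2) * k / n
        t1 = (math.pi / 2) * (k + 1) / n
        x0, y0 = math.cos(t0), math.sin(t0)
        x1, y1 = math.cos(t1), math.sin(t1)
        fx, fy = F((x0 + x1) / 2, (y0 + y1) / 2)   # force at the segment midpoint
        W += fx * (x1 - x0) + fy * (y1 - y0)       # F . delta_l
    return W

print(work_along_circle())
```

For this particular field the exact line integral over the quarter circle is $\pi/2$, so the sum above serves as a self-check of the discretization.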
+ +Suppose that the curve is described parametrically [i.e., $x(t)$ and $y(t)$ are given]. Furthermore, suppose that the vector field representing the force is given by: + +$$ \vec{F} = P(x, y)\hat{e}_x + Q(x, y)\hat{e}_y \qquad (7.59) $$ + +The displacement element is given by: + +$$ \Delta l = \Delta x \hat{e}_x + \Delta y \hat{e}_y \qquad (7.60) $$ + +The infinitesimal element of work, which is the dot product of the above two quantities, can then be written as: + +$$ \Delta W = P \Delta x + Q \Delta y \qquad (7.61) $$ + +This expression can be simplified if the curve is written in parametric form. Assuming the parameter is *t*, then $\Delta W$ can be written as a function of the single parameter *t*: \ No newline at end of file diff --git a/samples/texts/348597/page_224.md b/samples/texts/348597/page_224.md new file mode 100644 index 0000000000000000000000000000000000000000..7a5e90f777cfd5633cf27424dd410eff504fd5d5 --- /dev/null +++ b/samples/texts/348597/page_224.md @@ -0,0 +1,48 @@ +complex numbers rather than real numbers, as we have restricted ourselves +thus far. Using these ideas, we discuss, in a very preliminary fashion, Fourier +series and Legendre polynomials. + +We use the Dirac notation to stress the commonalties that unite the finite- +and infinite-dimensional vector spaces. We, at this level, sacrifice the mathe- +matical rigor for the sake of simplicity, and even commit a few sins in our +treatment of limits. A more formal and rigorous treatment of this subject can +be found in many books on functional analysis, to which we refer the inter- +ested reader for further details. + +A Hilbert space is much the same type of mathematical object as the vector +spaces that you have been introduced to in the preceding sections of this +chapter. Its elements are functions, instead of n-dimensional vectors. 
It is infinite-dimensional because the function has a value, say a component, at each
+point in space, and space is continuous with an infinite number of points.
+
+The Hilbert space has the following properties:
+
+1. The space is linear under the two conditions that:
+
+a. If *a* is a constant and |**φ**⟩ is any element in the space, then *a*|**φ**⟩
+is also an element of the space; and
+
+b. If *a* and *b* are constants, and |**φ**⟩ and |**ψ**⟩ are elements belonging to the space, then *a*|**φ**⟩ + *b*|**ψ**⟩ is also an element of the space.
+
+2. There is an inner (dot) product for any two elements in the space.
+The definition adopted here for this inner product for functions
+defined in the interval $t_{\min} \le t \le t_{\max}$ is:
+
+$$
+\langle \psi | \phi \rangle = \int_{t_{\min}}^{t_{\max}} \bar{\psi}(t) \phi(t) dt \quad (7.64)
+$$
+
+3. Any element of the space has a norm (“length”) that is positive and related to the inner product as follows:
+
+$$
+\|\varphi\|^2 = \langle \varphi | \varphi \rangle = \int_{t_{\min}}^{t_{\max}} \bar{\varphi}(t)\varphi(t)dt \quad (7.65)
+$$
+
+Note that the requirement for the positivity of the norm is what
+necessitated the complex conjugation in the definition of
+the bra-vector.
+
+4. The Hilbert space is complete; or, loosely speaking, the Hilbert space contains all its limit points. This condition is too technical and will not be further discussed here. 
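Properties 2 and 3 are easy to imitate numerically by replacing the integrals in Eqs. (7.64) and (7.65) with a quadrature rule. A sketch (Python used purely for illustration; the interval, the test function, and the names are our own choices):

```python
import cmath, math

def inner(psi, phi, t_min=0.0, t_max=1.0, n=2000):
    # <psi|phi> = integral of conj(psi(t)) * phi(t) dt, cf. Eq. (7.64),
    # approximated here by the midpoint rule
    h = (t_max - t_min) / n
    s = 0.0
    for k in range(n):
        t = t_min + (k + 0.5) * h
        s += psi(t).conjugate() * phi(t) * h
    return s

def norm_sq(phi, **kw):
    # ||phi||^2 = <phi|phi>, cf. Eq. (7.65); the conjugation makes it real and non-negative
    return inner(phi, phi, **kw).real

phi = lambda t: cmath.exp(2j * math.pi * t)   # an example element of the space
print(norm_sq(phi))
```

Since the complex exponential has unit modulus, its squared norm over one period is 1, and two exponentials with distinct integer frequencies come out numerically orthogonal.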
+ +In this Hilbert space, we define similar concepts to those in finite-dimen- +sional vector spaces: \ No newline at end of file diff --git a/samples/texts/348597/page_226.md b/samples/texts/348597/page_226.md new file mode 100644 index 0000000000000000000000000000000000000000..3872357428688764b7541cf29f2f6ce324def208 --- /dev/null +++ b/samples/texts/348597/page_226.md @@ -0,0 +1,31 @@ +Basis: + +$$|u_n\rangle = \exp(j2\pi nt) \quad \text{and} \quad \langle u_n| = \exp(-j2\pi nt) \qquad (7.71)$$ + +Orthonormality of the basis vectors: + +$$\langle u_m | u_n \rangle = \int_{-1/2}^{1/2} \exp(-j2\pi mt) \exp(j2\pi nt) dt = \begin{cases} 1 & \text{if } m=n \\ 0 & \text{if } m \neq n \end{cases} \qquad (7.72)$$ + +Decomposition rule: + +$$|\varphi\rangle = \sum_{n=-\infty}^{\infty} c_n |u_n\rangle = \sum_{n=-\infty}^{\infty} c_n \exp(j2\pi nt) \qquad (7.73)$$ + +where + +$$c_n = \langle u_n | \varphi \rangle = \int_{-1/2}^{1/2} \exp(-j2\pi nt)\varphi(t)dt \qquad (7.74)$$ + +Parseval's identity: + +$$\|\varphi\|^2 = \langle\varphi|\varphi\rangle = \int_{-1/2}^{1/2} \bar{\varphi}(t)\varphi(t)dt = \int_{-1/2}^{1/2} |\varphi(t)|^2 dt = \sum_{n=-\infty}^{\infty} |c_n|^2 \qquad (7.75)$$ + +**Example 7.9** + +Derive the analytic expression for the potential difference across the capacitor in the RLC circuit of Figure 4.5 if the temporal profile of the source potential is a periodic function, of period 1, in some normalized units. + +**Solution:** + +1. Because the potential is periodic with period 1, it can be expanded using Eq. (7.73) in a Fourier series with basis functions $\{e^{j2\pi nt}\}$: + +$$V_s(t) = \operatorname{Re} \left\{ \sum_n \tilde{V}_s^n e^{j 2 \pi n t} \right\} \qquad (7.76)$$ + +where $\tilde{V}_s^n$ is the phasor associated with the frequency mode (2$\pi n$). (Note that $n$ in the expressions for the phasors is a superscript and not a power.) 
\ No newline at end of file diff --git a/samples/texts/348597/page_227.md b/samples/texts/348597/page_227.md new file mode 100644 index 0000000000000000000000000000000000000000..d4dc600e0ca05cf3339442eeeba62c0aa5505b21 --- /dev/null +++ b/samples/texts/348597/page_227.md @@ -0,0 +1,27 @@ +2. We find $\tilde{V}_c^n$, the capacitor response phasor associated with the $\tilde{V}_s^n$ excitation. This can be found by noting that the voltage across the capacitor is equal to the capacitor impedance multiplied by the current phasor, giving: + +$$ \tilde{V}_c^n = Z_c^n \tilde{I}^n = \frac{Z_c^n \tilde{V}_s^n}{Z_c^n + \tilde{Z}_R^n + \tilde{Z}_L^n} \quad (7.77) $$ + +where from the results of Section 6.8, particularly Eqs. (6.83) through (6.85), we have: + +$$ Z_c^n = \frac{1}{j2\pi nC} \quad (7.78) $$ + +$$ Z_L^n = j2\pi nL \quad (7.79) $$ + +$$ Z_R^n = R \quad (7.80) $$ + +3. Finally, we use the linearity of the ODE system and write the solution as the linear superposition of the solutions corresponding to the response to each of the basis functions; that is, + +$$ V_c(t) = \operatorname{Re} \left\{ \sum_n \frac{Z_c^n \tilde{V}_s^n}{Z_c^n + Z_R^n + Z_L^n} e^{j2\pi nt} \right\} \quad (7.81) $$ + +leading to the expression: + +$$ V_c(t) = \operatorname{Re} \left\{ \sum_n \frac{\tilde{V}_s^n}{1 - (2\pi n)^2 LC + j(2\pi n)RC} e^{j2\pi nt} \right\} \quad (7.82) $$ + +## *Homework Problem* + +**Pb. 7.27** Consider the RLC circuit. Assuming the same notation as in Section 6.5.3, but now assume that the source potential is given by: + +$$ V_s = V_0 \cos^6(\omega t) $$ + +a. Find analytically the potential difference across the capacitance. (Hint: Write the power of the trigonometric function as a function of the different multiples of the angle.) 
\ No newline at end of file diff --git a/samples/texts/348597/page_228.md b/samples/texts/348597/page_228.md new file mode 100644 index 0000000000000000000000000000000000000000..70f49073676962a809e81ba362bd896e334bc6ff --- /dev/null +++ b/samples/texts/348597/page_228.md @@ -0,0 +1,27 @@ +b. Find numerically the steady-state solution to this problem using the techniques of Chapter 4, and assume for some normalized units the following values for the parameters: + +$$LC = 1, RC = 1, \omega = 2\pi$$ + +c. Compare your numerical results with the analytical results. + +**Application 2: The Legendre Polynomials** + +We propose to show that the Legendre polynomials are an orthonormal basis for all functions of compact support over the interval $-1 \le x \le 1$. Thus far, we have encountered the Legendre polynomials twice before. They were defined through their recursion relations in **Pb. 2.25**, and in Section 4.7.1 through their defining ODE. In this application, we define the Legendre polynomials through their generating function; show how their definitions through their recursion relation, or through their ODE, can be deduced from their definition through their generating function; and show that they constitute an orthonormal basis for functions defined on the interval $-1 \le x \le 1$. + +1. The generating function for the Legendre polynomials is given by the simple form: + +$$G(x,t) = \frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{l=0}^{\infty} P_l(x)t^l \quad (7.83)$$ + +2. The lowest orders of $P_l(x)$ can be obtained from the small $t$-expansion of $G(x, t)$; therefore, expanding Eq. (7.83) to first order in $t$ gives: + +$$1 + xt + O(t^2) = P_0(x) + tP_1(x) + O(t^2) \quad (7.84)$$ + +from which, we can deduce that: + +$$P_0(x) = 1 \quad (7.85)$$ + +$$P_1(x) = x \quad (7.86)$$ + +3. 
It is straightforward to verify by substitution that the generating function satisfies the equation:
+
+$$(1 - 2xt + t^2) \frac{\partial G}{\partial t} + (t - x)G = 0 \quad (7.87)$$
\ No newline at end of file
diff --git a/samples/texts/348597/page_229.md b/samples/texts/348597/page_229.md
new file mode 100644
index 0000000000000000000000000000000000000000..b0a62af5b2cb839b7abc75370b894f42307a60a4
--- /dev/null
+++ b/samples/texts/348597/page_229.md
@@ -0,0 +1,45 @@
+Because power series can be differentiated term by term, Eq. (7.87)
+gives:
+
+$$
+(1 - 2xt + t^2) \sum_{l=0}^{\infty} l P_l(x) t^{l-1} + (t-x) \sum_{l=0}^{\infty} P_l(x) t^l = 0 \quad (7.88)
+$$
+
+Since this equation should hold true for all values of *t*, this means
+that all coefficients of any power of *t* should be zero; therefore:
+
+$$
+(l+1)P_{l+1}(x) - 2lxP_l(x) + (l-1)P_{l-1}(x) + P_{l-1}(x) - xP_l(x) = 0 \quad (7.89)
+$$
+
+or, collecting terms, this can be written as:
+
+$$
+(l + 1)P_{l+1}(x) - (2l + 1)xP_l(x) + lP_{l-1}(x) = 0 \quad (7.90)
+$$
+
+This is the recursion relation of Pb. 2.25.
+
+4. By substitution in the explicit expression of the generating function, we can also verify that:
+
+$$
+(1 - 2xt + t^2) \frac{\partial G}{\partial x} - tG = 0 \quad (7.91)
+$$
+
+which leads to:
+
+$$
+(1 - 2xt + t^2) \sum_{l=0}^{\infty} \frac{dP_l(x)}{dx} t^l - \sum_{l=0}^{\infty} P_l(x)t^{l+1} = 0 \quad (7.92)
+$$
+
+Again, looking at the coefficients of the same power of $t$ permits
+us to obtain another recursion relation:
+
+$$
+\frac{dP_{l+1}(x)}{dx} - 2x \frac{dP_l(x)}{dx} + \frac{dP_{l-1}(x)}{dx} - P_l(x) = 0 \quad (7.93)
+$$
+
+Differentiating Eq. (7.90), we first eliminate $\frac{dP_{l-1}(x)}{dx}$ and then
+$\frac{dP_l(x)}{dx}$ from the resulting equation, and use Eq. 
(7.93) to obtain
+two new recursion relations:
\ No newline at end of file
diff --git a/samples/texts/348597/page_23.md b/samples/texts/348597/page_23.md
new file mode 100644
index 0000000000000000000000000000000000000000..7061107ac49024c480f39c7e32dacfa40564b3aa
--- /dev/null
+++ b/samples/texts/348597/page_23.md
@@ -0,0 +1,45 @@
+A. The simplest form of this structure is:
+
+**if** expression
+commands evaluated if expression is True
+
+**else**
+commands evaluated if expression is False
+
+**end**
+
+NOTES
+
+1. The commands between the **if** and **else** statements are evaluated if all elements in the expression are true.
+
+2. The conditional expression uses the Boolean logical symbols & (and), | (or), and ~ (not) to connect different propositions.
+
+**Example 1.9**
+
+Find, for integer $0 < a \le 10$, the values of C defined as follows:
+
+$$C = \begin{cases} ab & \text{for } a > 5 \\ \frac{3}{2}ab & \text{for } a \le 5 \end{cases}$$
+
+where $b = 15$.
+
+Solution: Edit and execute the following *script M-file*:
+
+```matlab
+for a=1:10
+  b=15;
+  if a>5
+    C(a)=a*b;
+  else
+    C(a)=(a*b)*(3/2);
+  end
+end
+```
+
+Check that the values of C that you obtain by typing C are:
+
+`22.5 45 67.5 90 112.5 90 105 120 135 150`
+
+B. When there are three or more alternatives, the `if-else-end` structure takes the form:
+
+**if** expression 1
+Commands 1 evaluated if expression 1 is True
\ No newline at end of file
diff --git a/samples/texts/348597/page_230.md b/samples/texts/348597/page_230.md
new file mode 100644
index 0000000000000000000000000000000000000000..44762ad41ff56e04ebef12f6b3ffb5c3ecd72885
--- /dev/null
+++ b/samples/texts/348597/page_230.md
@@ -0,0 +1,29 @@
+$$ \frac{dP_{l+1}(x)}{dx} - x \frac{dP_l(x)}{dx} = (l+1)P_l(x) \quad (7.94) $$
+
+and
+
+$$ x \frac{dP_l(x)}{dx} - \frac{dP_{l-1}(x)}{dx} = l P_l(x) \quad (7.95) $$
+
+Adding Eqs. 
(7.94) and (7.95), we obtain the more symmetric formula: + +$$ \frac{dP_{l+1}(x)}{dx} - \frac{dP_{l-1}(x)}{dx} = (2l + 1)P_l(x) \quad (7.96) $$ + +Replacing $l$ by $l-1$ in Eq. (7.94) and eliminating $P'_{l-1}(x)$ from Eq. (7.95), we find that: + +$$ (1-x^2) \frac{dP_l(x)}{dx} = l P_{l-1}(x) - l x P_l(x) \quad (7.97) $$ + +Differentiating Eq. (7.97) and using Eq. (7.95), we obtain: + +$$ \frac{d}{dx} \left[ (1-x^2) \frac{dP_l(x)}{dx} \right] + l(l+1)P_l(x) = 0 \quad (7.98a) $$ + +which can be written in the equivalent form: + +$$ (1 - x^2) \frac{d^2 P_l(x)}{dx^2} - 2x \frac{dP_l(x)}{dx} + l(l+1)P_l(x) = 0 \quad (7.98b) $$ + +which is the ODE for the Legendre polynomial, as previously pointed out in Section 4.7.1. + +5. Next, we want to show that if $l \neq m$, we have the orthogonality between any two elements (with different indices) of the basis; that is + +$$ \int_{-1}^{1} P_l(x) P_m(x) dx = 0 \quad (7.99) $$ + +To show this relation, we multiply Eq. (7.98) on the left by $P_m(x)$ and integrate to obtain: \ No newline at end of file diff --git a/samples/texts/348597/page_232.md b/samples/texts/348597/page_232.md new file mode 100644 index 0000000000000000000000000000000000000000..2a586d92c1c465488a4545d5e6c5622aba7fddad --- /dev/null +++ b/samples/texts/348597/page_232.md @@ -0,0 +1,31 @@ +$$ \int_{-1}^{1} P_l^2(x) dx = \frac{(2l-1)}{(2l+1)} \int_{-1}^{1} P_{l-1}^2(x) dx \quad (7.107) $$ + +Repeated applications of this formula and the use of Eq. (7.86) yields: + +$$ \int_{-1}^{1} P_l^2(x) dx = \frac{3}{(2l+1)} \int_{-1}^{1} P_1^2(x) dx = \frac{2}{(2l+1)} \quad (7.108) $$ + +Direct calculations show that this is also valid for $l = 0$ and $l = 1$. 
Therefore, the orthonormal basis functions are given by:
+
+$$ |u_l\rangle = \sqrt{l+\frac{1}{2}} P_l(x) \quad (7.109) $$
+
+The general theorem that summarizes the decomposition of a function into the Legendre polynomials basis states:
+
+**THEOREM**
+
+If the real function $f(x)$ defined over the interval $[-1, 1]$ is piecewise smooth and if the integral $\int_{-1}^{1} f^2(x)dx < \infty$, then the series:
+
+$$ f(x) = \sum_{l=0}^{\infty} c_l P_l(x) \quad (7.110) $$
+
+where
+
+$$ c_l = \left(l + \frac{1}{2}\right) \int_{-1}^{1} f(x) P_l(x) dx \quad (7.111) $$
+
+converges to $f(x)$ at every continuity point of the function.
+
+The proof of this theorem is not given here.
+
+**Example 7.10**
+
+Find the decomposition into Legendre polynomials of the following function:
+
+$$ f(x) = \begin{cases} 0 & \text{for } -1 \le x \le a \\ 1 & \text{for } a < x \le 1 \end{cases} \quad (7.112) $$
\ No newline at end of file
diff --git a/samples/texts/348597/page_234.md b/samples/texts/348597/page_234.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a706dea8caafeb87c65dafa56a11eff625da6bc
--- /dev/null
+++ b/samples/texts/348597/page_234.md
@@ -0,0 +1,9 @@
+## 7.9 MATLAB Commands Review
+
+**'** Transposition (i.e., for vectors with real components, this changes a row into a column).
+
+**norm** Computes the Euclidean length of a vector.
+
+**cross** Calculates the cross product of two 3-D vectors.
+
+**det** Determinant; used here to compute the triple scalar product.
\ No newline at end of file
diff --git a/samples/texts/3747225/page_1.md b/samples/texts/3747225/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..50e3f0a6e754444c231202d696d3755b34ad72da
--- /dev/null
+++ b/samples/texts/3747225/page_1.md
@@ -0,0 +1,22 @@
+# Parallel algorithms for approximation of distance maps on parametric surfaces
+
+Ofir Weber¹, Yohai S. Devir², Alexander M. 
Bronstein⁴, and Ron Kimmel⁵
+
+We present an efficient $O(n)$ numerical algorithm for first-order approximation of geodesic distances on geometry images, where *n* is the number of points on the surface. The structure of our algorithm allows efficient implementation on parallel architectures. Two implementations on a SIMD processor and on a GPU are discussed. Numerical results demonstrate up to four orders of magnitude improvement in execution time compared to the state-of-the-art algorithms.
+
+**Categories and Subject Descriptors:** Numerical Analysis [G.1.0]: Parallel algorithms
+Additional Key Words and Phrases: eikonal equation, geodesic distances, fast marching, geometry image, multiple charts, parallel algorithms, GPU, SIMD
+
+## 1. INTRODUCTION
+
+Approximation of geodesic distances on curved surfaces is an important computational geometric problem, appearing in many computer graphics applications. For example, several surface segmentation and editing methods are based on cutting the surface along geodesic paths [Katz and Tal 2004; Funkhouser et al. 2004]. Function interpolation on meshes requires the knowledge of geodesic distances, and has numerous uses such as skinning [Sloan et al. 2001] and mesh watermarking [Praun et al. 1999]. Isometry-invariant shape classification [Elad and Kimmel 2001; Hilaga et al. 2001; Mémoli and Sapiro 2005; Bronstein et al. 2006b], minimum-distortion parametrization [Zigelman et al. 2002; Zhou et al. 2004; Peyré and Cohen 2003], and non-rigid correspondence techniques [Bronstein et al. 2006a] require the matrix of all pair-wise geodesic distances on the surface. Other fields where the need to compute geodesic distance maps arises are medical imaging, geophysics [Sethian and Popovici 2006], and robot motion planning [Hershberger and Suri 1999] and navigation, to mention a few. 
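Several of the applications listed above settle for a graph approximation of geodesic distances: treat the mesh edges as a weighted graph and run Dijkstra's algorithm. This yields only an upper bound on the true geodesic distance, since paths are restricted to edges, but it is a useful baseline for the consistent schemes discussed below. A minimal sketch (Python used purely for illustration; the toy graph is invented):

```python
import heapq

def dijkstra(adj, src):
    # adj: {vertex: [(neighbor, edge_length), ...]}; returns shortest path lengths from src
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue            # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy "mesh": a unit square a-b-c-d with one diagonal a-c
adj = {
    "a": [("b", 1.0), ("d", 1.0), ("c", 1.414)],
    "b": [("a", 1.0), ("c", 1.0)],
    "c": [("b", 1.0), ("d", 1.0), ("a", 1.414)],
    "d": [("a", 1.0), ("c", 1.0)],
}
print(dijkstra(adj, "a"))   # → {'a': 0.0, 'b': 1.0, 'c': 1.414, 'd': 1.0}
```

The edge-restricted distance to `c` is the diagonal length 1.414 rather than 2.0 through `b`; on finer meshes the residual metrication error is exactly what fast-marching-type solvers of the eikonal equation remove.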
+
+The problem of distance map computation can be formulated as the viscosity solution of the *eikonal equation*,
+
+$$ ||\nabla t|| = 1, \quad t(S) = 0, \tag{1} $$
+
+where *S* is a set of source points on the surface. In optics and acoustics, the eikonal equation governs the propagation of waves through a medium. The solution of the eikonal equation demonstrates that light or acoustic waves traverse the path between two points that takes the least time, a physical law known as *Fermat's principle*.
+
+In [1996], Sethian proposed an $O(n \log n)$ algorithm for first-order approximation of weighted distance maps on domains with weighted Euclidean metric, known as *fast marching*. A similar algorithm based on a different discretization of the eikonal equation was developed independently by Tsitsiklis [1995]. The main idea of fast marching is to simulate a wave front advancing from a set of source points *S*. The propagating front can be thought of as a “prairie fire” evolution towards directions where the grid has not yet been “burnt out”. At time $t=0$, the fire starts at the source points, and the algorithm computes, for each vertex, the time value *t* at which the advancing fire front reaches it.
+
+Algorithm 1 outlines the fast marching method. Solution of the eikonal equation starts by setting an initial (usually zero) distance to the set of source points *S* and updating the neighboring points by simulating an advancing wavefront. The
\ No newline at end of file
diff --git a/samples/texts/3747225/page_10.md b/samples/texts/3747225/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..b89cd2cddc54fbb4e9b977a0b21c4dae57fb6b75
--- /dev/null
+++ b/samples/texts/3747225/page_10.md
@@ -0,0 +1,19 @@
+Fig. 4. Update of a point on a grid with eight-neighbor connectivity using the raster scan algorithm. First row: four directed raster scans; second row: the same raster scans rotated by 45°.
+
+of the Hessian $\mathbf{H}_i$. 
This means that some parametrizations of the same surface may be less favorable for the raster
+scan algorithm. For example, in the parametrization $\mathbf{x} = (u_1 \cos u_2, u_1 \sin u_2, 0)^T$ of a flat disc, the characteristics in the
+parametrization domain are curved and require multiple iterations to be covered.
+
+Note that the bound is a worst case bound; in practice the number of iterations required for convergence may be smaller. Adding another triangle to the grid update such that every grid point is updated from four “causal” (in the raster scan order) neighbors rather than from three causal neighbors as shown in Figure 4 may reduce the number of iterations. It is important to emphasize that in the worst case $N_{\text{iter}}$ will remain unchanged.
+
+The main disadvantage of the raster scan algorithm is the lack of flexibility in controlling the tradeoff between the algorithm complexity and accuracy, unlike some other distance approximation algorithms, like the phase flow [Ying and Candès 2006] and the approximate MMP algorithm [Surazhsky et al. 2005].
+
+## 5. PARALLELIZATION
+
+The structure of the raster scan algorithm gives much opportunity for exploiting data independence to compute some
+of the grid updates concurrently on a set of parallel computation units. To demonstrate the parallelism, let us consider
+for example the right-down raster scan, starting from the top leftmost grid point $t_{11}$. After $t_{11}$ has been updated, the
+points $t_{12}$ and $t_{21}$ can be updated concurrently, since their updates do not depend on each other. Next, the points $t_{31}$,
+$t_{22}$ and $t_{13}$ are updated concurrently, and so on (Figure 5, left). Assuming the number of available computation units
+is $P \ge \min\{M,N\}$, the right-down raster scan can be performed in $M+N-1$ steps, where at each step $k$ the points
+along the line $i+j=k+1$ are updated. If the number of processors is smaller, every step is serialized into $\lceil (k+1)/P \rceil$ sub-steps. 
The other three directed raster scans are parallelized in the same manner. \ No newline at end of file diff --git a/samples/texts/3747225/page_11.md b/samples/texts/3747225/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..fcb806860a47aed2de204d9ef5520874a841db8c --- /dev/null +++ b/samples/texts/3747225/page_11.md @@ -0,0 +1,46 @@ +**Algorithm 3:** Raster scan algorithm on a single-chart geometry image. + +**Input:** Numerical $M \times N$ grid $\mathbf{U}$, set of source points $S \subset \mathbf{U}$ with the corresponding initial values $t(s)$ +**Output:** The distance map $t: \mathbf{U} \to \mathbb{R}^+$. + +*Initialization* + +1. Pre-compute the update equation coefficients for each triangle. + +2. **foreach** point $u \in \mathbf{U} \setminus S$ **do** $t(u) \leftarrow -\infty$ + +*Iteration* + +3. **for** iter = 1,2,... **do** + +4. **for** $i = 1, 2, ..., M$ **do** +   Right-up scan + +5. **for** $j = 1, 2, ..., N$ **do** Update $(u_{ij})$ +   Right-down scan + +6. **for** $j = N, N-1, ..., 1$ **do** Update $(u_{ij})$ + +7. **end** + +8. **for** $i = M, M-1, ..., 1$ **do** +   Left-up scan + +9. **for** $j = 1, 2, ..., N$ **do** Update $(u_{ij})$ +   Left-down scan + +10. **for** $j = N, N-1, ..., 1$ **do** Update $(u_{ij})$ + +11. **end** + +12. **if** $\|t^{(n)} - t^{(n-1)}\| = 0$ **then** stop + +13. **end** + +An obvious disadvantage of such a parallelization is the lack of data coherence in the memory, which may deteriorate performance on many architectures such as GPUs. Another disadvantage is the fact that the number of operations in each step is not constant and the benefit from the parallelization is obtained only on sufficiently long diagonals. A way to overcome these two difficulties is to rotate the direction of all raster scans by 45° (Figure 4, second row). Using the rotated raster scans, rows or columns of the grid can be updated concurrently (Figure 5, right). 
This allows coherent access to memory and provides better parallelization with a speedup factor of $P$. Since the same operations are performed to update all the grid points, the algorithm is suitable for implementation on a SIMD processor. We refer to this parallelized scheme as *parallel marching*.
+
+## 5.1 Extensions to multi-chart geometry images
+
+The approach presented so far is limited to geometry images represented as a single chart, though the latter can be of arbitrarily complex topology (such a topology usually introduces “holes” in the parametrization domain, which can be handled efficiently by “masking” the update in those regions). This may be a major limitation in many practical applications, where due to the varying level of detail on the surface, the representation as a single-chart geometry image is either inaccurate or inefficient.
+
+Here, we discuss a generalization of the raster scan algorithm to *multi-chart geometry images*, i.e., surfaces represented as an atlas of overlapping charts. Formally, we are given a collection of $K$ maps $\mathbf{x}^k : \mathbf{U}^k \to \mathbb{R}^3$, $k = 1, ..., K$, where each $\mathbf{U}^k$ is sampled on a regular Cartesian grid, usually with a different sampling density according to the detail level of the underlying surface. For simplicity, we assume that the charts overlap only at the boundaries, i.e., for two neighboring charts $i$ and $j$, $\mathbf{x}^i(\mathbf{U}^i) \cap \mathbf{x}^j(\mathbf{U}^j) \subseteq \mathbf{x}^i(\partial \mathbf{U}^i) \cap \mathbf{x}^j(\partial \mathbf{U}^j)$. We denote by $\mathcal{P}_{ij}$ an operator projecting the values of the distance
\ No newline at end of file
diff --git a/samples/texts/3747225/page_12.md b/samples/texts/3747225/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..0062e292706b3bc264681c5cdbd3d657f3127e67
--- /dev/null
+++ b/samples/texts/3747225/page_12.md
@@ -0,0 +1,9 @@
+Fig. 5. 
Dependency graph in the right-down (left) and the rotated up-left-down (right) raster scan updates. Grid point updates that can be computed concurrently are numbered and shaded with different colors.

map $t^i$ from $\partial U^i$ onto the shared portion of $\partial U^j$.

The charts are organized as an undirected adjacency graph with $K$ vertices, and edges $(i, j)$ corresponding to each pair of adjacent charts $i$ and $j$. We denote by $N_i$ the collection of all the neighboring charts of chart $i$. The problem of distance computation on a multi-chart geometry image can be thought of as distance computation in such an adjacency graph, in which each vertex represents a chart, and can therefore be solved using a generalization of Dijkstra's algorithm (Algorithm 4).

Each chart is associated with a quadruplet $(k,t,S_0,t_0)$, where $k=1,...,K$ is a chart index, and $t$ is a scalar distance value, whose assignment is discussed in the sequel. $t_0$ denotes a set of fixed values on $S_0 \subseteq U^k$ serving as the boundary conditions. Additionally, a distance map $t^k: U^k \to \mathbb{R}^+$ is maintained for each chart. The algorithm maintains a priority queue $Q$ holding as its entries the quadruplets $(k,t,S_0,t_0)$. Similarly to the standard Dijkstra's algorithm, the queue is sorted according to $t$. Initially, the queue is filled with the source values given as the input. For example, when computing the distance map from a single point $u \in U^k$, $Q$ is set to $\{(k,0,u,0)\}$ (note, however, that the source is not necessarily limited to a single point, and may span across multiple charts). The algorithm proceeds by removing the quadruplet $(k,t,S_0,t_0)$ with the minimum $t$ from the queue, and running the single-chart raster scan algorithm on $U^k$ with $S_0$ and $t_0(S_0)$ serving as the source. This produces the distance map $t^k: U^k \to \mathbb{R}^+$.
Next, the values of $t^k$ on the chart boundary are projected onto the neighboring charts $U^i$, $i \in N_k$, using the operators $\mathcal{P}_{ki}$. In order to guarantee monotonicity, the minimum between the extrapolated value $\mathcal{P}_{ki} t^k$ and the current value of $t^i$ is used. We denote by $\delta^i$ the maximum difference between the previous and the new value of $t^i$ on the shared portion of boundary $\partial U^i$. A non-trivial $\delta^i$ implies that the distance map $t^i$ is not up-to-date. In such a case, points on the boundary $\partial U^i$ where the distance map value has decreased are added to the initial set of source points, and the chart is added to the priority queue with the distance value $t$ set to be the minimum value of the updated points. The algorithm terminates when the queue becomes empty, implying by construction that all distance maps are up-to-date.

Unlike the standard Dijkstra's algorithm, the described procedure is no longer a single-pass algorithm. In fact, a geodesic can pass back and forth from one chart to another, resulting in multiple updates of both. However, the number of such repetitions is bounded under conditions similar to those stated in Proposition 1. \ No newline at end of file diff --git a/samples/texts/3747225/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..a6253ec2533700f21319b67d8b1b7dcc483aaf70 --- /dev/null +++ b/samples/texts/3747225/page_13.md @@ -0,0 +1,51 @@

**Algorithm 4:** Raster scan algorithm on a multi-chart geometry image.

**Input:** $K$ grids $\mathbf{U}^1, \dots, \mathbf{U}^K$; projection operators $\mathcal{P}_{ij}$; sources with corresponding initial values
$$ \{(S_{\text{init}}^1, t_{\text{init}}^1), \dots, (S_{\text{init}}^K, t_{\text{init}}^K)\} $$ on one or more charts.
**Output:** The distance maps $t^k: \mathbf{U}^k \to \mathbb{R}^+$ for $k=1, \dots, K$.

1. Initialize the priority queue to $Q = \emptyset$.
2. **for** $k = 1, \dots, K$ **do**

3. **if** $S_{\text{init}}^k \neq \emptyset$ **then** set $t_{\text{min}} = \min\{t_{\text{init}}^k(u) : u \in S_{\text{init}}^k\}$, and add $(k, t_{\text{min}}, S_{\text{init}}^k, t_{\text{init}}^k)$ to the queue.

4. **end**

5. **while** $Q \neq \emptyset$ **do**

6. Set $(k, t_{\text{min}}, S_0, t_0) = \operatorname*{argmin}\{t_{\text{min}} : (k, t_{\text{min}}, S_0, t_0) \in Q\}$, and remove it from the queue.

7. Run the single-chart raster scan algorithm to compute the distance map $t^k$ on $\mathbf{U}^k$ using $t_0(S_0)$ as the source.

8. **forall** $i \in \mathcal{N}_k$ **do**

9. Set $\bar{t}_i = \mathcal{P}_{ki} t^k$.

10. **forall** $u \in \partial \mathbf{U}^i \cap \partial \mathbf{U}^k$ **do**

11. Set $\bar{t}_i(u) = \min\{\bar{t}_i(u), t^i(u)\}$.

12. **end**

13. Set $S_{\text{upd}}^i = \{u \in \partial \mathbf{U}^i \cap \partial \mathbf{U}^k : t^i(u) - \bar{t}_i(u) > \varepsilon\}$, and update $t^i(u) = \bar{t}_i(u)$ on $u \in S_{\text{upd}}^i$.

14. **if** $S_{\text{upd}}^i \neq \emptyset$ **then** compute $t_{\text{min}}^i = \min\{t^i(u) : u \in S_{\text{upd}}^i\}$; **else** set $t_{\text{min}}^i = \infty$.

15. **end**

16. Find $i = \operatorname*{argmin}\{t_{\text{min}}^i : i \in \mathcal{N}_k\}$.

17. **if** $t_{\text{min}}^i < \infty$ **then** set $S_0 = S_{\text{init}}^i \cup \partial \mathbf{U}^i$, $t_0 = t_{\text{init}}^i(S_{\text{init}}^i) \cup t^i(\partial \mathbf{U}^i)$, and add $(i, t_{\text{min}}^i, S_0, t_0)$ to the queue.

18. **end**

## 6. DISTANCE MAP COMPUTATION ON A GPU

Modern GPUs are extremely powerful processors, capable of performing on the order of a trillion operations (one teraflop) per second. Such high performance originates from the computation-hungry computer graphics applications, such as rendering and texture mapping.
Though the first GPUs were designed exclusively for these applications, the availability of such powerful architectures led to numerous attempts to employ graphics hardware for computationally-demanding applications besides computer graphics, e.g. scientific computing. This has evolved into a trend of general-purpose computing on GPUs [GPG], to which the manufacturers of graphics processors responded by developing a new generation of programmable GPUs. NVIDIA [CUD] and AMD [CTM], the two major GPU vendors, released their first GPGPU environments in early 2007. The new platforms completely hide the low-level graphics functionality of the GPU and expose the GPU as a general massively parallel machine capable of running thousands of threads concurrently. With the new environment, developers do not need prior computer graphics knowledge, and programming is done using high-level languages such as C.

### 6.1 CUDA

In this paper, we used the Compute Unified Device Architecture (CUDA) platform developed by NVIDIA for the implementation of the PMM algorithm. Similar results could be obtained by using the AMD platform [CTM]. For the sake of completeness, we briefly overview the most important features of CUDA; for a comprehensive review, refer to the CUDA programming guide [CUD].

The G8X series GPUs supporting CUDA have multiple independent processing units. When programmed using \ No newline at end of file diff --git a/samples/texts/3747225/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..dad8cd97ca0f50ae18efffe45b06770ab9b5b50b --- /dev/null +++ b/samples/texts/3747225/page_14.md @@ -0,0 +1,15 @@ +CUDA, the GPU is viewed as a processing unit capable of executing thousands of threads in parallel. Both the CPU (referred to as *host*) and the GPU (referred to as *device*) maintain their own memory.
Data can be transferred from one memory to another over the PCIe bus, yet the bandwidth of this bus (4 GB/sec) is significantly smaller compared to the bandwidth of internal buses on the GPU (100 GB/sec for the latest NVIDIA GPUs).

CUDA provides access to device DRAM memory either through global memory or texture memory. In addition, CUDA features on-chip shared memory with extremely fast general read/write access, used to share data between threads. Texture memory is a read-only memory with a cache optimized for 2D spatial locality. The global memory is a read/write non-cached memory space. Access to device memory is relatively slow compared to the speed of arithmetic calculations, making GPUs especially suitable for programs with high *arithmetic intensity* (the ratio between ALU and memory access operations) and potentially inefficient for those with a low one. For example, the NVIDIA 8800GTX GPU can theoretically perform 345 billion floating-point operations per second, while having a memory bandwidth of only 86 GB/sec, i.e. 22 G floats/sec (when latency is completely hidden). In order to get better utilization of the computational power of the GPU and avoid the memory bandwidth bottleneck, access to device memory should be minimized. One way of doing so is by fetching a large portion of data from the global memory into shared memory (access to shared memory can be as fast as reading from a register), performing as much computation as possible, and finally writing the result back to the global memory. The shared memory in this case can be thought of as a user-managed cache.

Access to global memory should be coherent, in the sense that consecutive threads should access consecutive addresses in linear memory. There is no obligation to use such an access pattern; however, incoherent accesses will lead to extremely low memory bandwidth. Texture memory is more flexible in the sense that coherence is two-dimensional.
However, this is a read-only memory and there is no manual control over the caching scheme.

The architecture of the computational units in the GPU is *single-program-multiple-data* (SPMD), allowing the same function to be executed independently on different data. Functions executed in this way are called *kernels*; the execution of a kernel is organized as a grid of *thread blocks*. A thread block is a batch of *threads* that can cooperate by efficiently sharing data through fast shared memory and synchronizing their execution to coordinate memory accesses. The maximum number of threads in a single thread block is limited; however, many thread blocks can be batched together into a grid of blocks that can be processed by the device in parallel. It is important to note that communication and synchronization between threads of different thread blocks on the same grid is impossible. The only way to impose a global synchronization point on all threads is to divide the work into separate kernel invocations.

A serious limitation of the device memory is its latency (400–600 cycles). Much of this latency can be hidden by the GPU thread scheduler if there are sufficient independent arithmetic instructions that can be issued while waiting for memory accesses to complete. This means that while some threads are stalled by memory latency, others can progress with ALU computations. This is only possible if there are enough threads waiting for execution, which implies that a grid of thread blocks should contain as many threads as possible.

### 6.2 Algorithm

Although CUDA is a significant step towards general-purpose computing on GPUs, mapping a sequential algorithm from the CPU to the GPU is not trivial. Besides a parallel version of the algorithm, an efficient implementation must satisfy certain restrictions. In this section, we describe how to efficiently map PMM to the GPU architecture.
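To make the GPU mapping concrete, the four directed raster scans of Section 5 can be sketched in plain Python. This is a toy planar analogue, not the paper's code: it assumes a hypothetical unit metric and replaces the triangle-based Update$(u_{ij})$ with a chamfer-style minimum over the three vertices of the previously processed row; the inner loop over $j$ is exactly the part a kernel would execute in parallel.

```python
import math

def directed_scan(t, reverse_rows=False):
    # One directed scan: rows are processed sequentially, but each vertex
    # depends only on the previously processed row, so the N updates
    # within a row are mutually independent (one kernel call per row).
    M, N = len(t), len(t[0])
    prev = None
    for i in (range(M - 1, -1, -1) if reverse_rows else range(M)):
        if prev is not None:
            for j in range(N):  # parallelizable inner loop
                for dj, w in ((-1, math.sqrt(2)), (0, 1.0), (1, math.sqrt(2))):
                    if 0 <= j + dj < N:
                        t[i][j] = min(t[i][j], t[prev][j + dj] + w)
        prev = i

def parallel_marching_sketch(M, N, sources):
    # Up/down scans on the grid, then left/right scans on its transpose;
    # the transposed copy mimics the coherent row-by-row access pattern
    # used for the left/right distance map.
    t = [[math.inf] * N for _ in range(M)]
    for i, j in sources:
        t[i][j] = 0.0
    directed_scan(t)
    directed_scan(t, reverse_rows=True)
    t = [list(col) for col in zip(*t)]  # transpose for left/right scans
    directed_scan(t)
    directed_scan(t, reverse_rows=True)
    return [list(col) for col in zip(*t)]
```

On a planar grid, one round of the four scans already yields the chamfer (octagonal) approximation of the Euclidean distance; on a curved geometry image, the pre-computed triangle coefficients and possibly several iterations take the place of the fixed weights used here.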
As we mentioned in Section 5, the update of an entire $M \times N$ grid can be done in four consecutive scans (*up*, *down*, *left* and *right*). Each scan is further serialized into smaller parallel steps. For example, the *up* scan is composed of $M$ serialized steps. In each step, $N$ vertices in a row are updated in parallel. The distance map is allocated in the \ No newline at end of file diff --git a/samples/texts/3747225/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..3b76a2bcd7c565174004d7d92eef28f39e90f0dd --- /dev/null +++ b/samples/texts/3747225/page_15.md @@ -0,0 +1,18 @@ +global memory space as a read/write linear array of 32-bit floating point values. The geometrical properties of the underlying surface are stored in the read-only texture memory space. We can pre-compute the coefficients used by the numerical update scheme at the pre-processing stage and store them in the texture memory instead of the actual locations of the vertices.

A straightforward implementation is obtained by mapping each row or column of the grid to a single kernel call. Each thread in each thread block updates the distance at a single vertex according to the previous distance at that vertex and three distances of the vertices at the previous row/column (Figure 6). Kernel invocations serve as global synchronization points, so we can be sure that the next row/column will be processed only after the previous row/column is fully updated. The memory accesses of the *up* and *down* scans are coherent, yet the *left/right* scans use an incoherent memory access pattern, since the addresses of elements in a single column are far from each other in linear memory.

In order to overcome this limitation, we propose to organize the data in the following way. We allocate two separate arrays to hold the distance maps.
The first map is used solely for the *up* and *down* scans, while the second map is used solely for the *left* and *right* scans. The *left/right* map is stored in a transposed manner, so that we access both the *up/down* and *left/right* distance maps on a row-by-row basis. Since each scan depends on the result of the previous scan, we must copy the results obtained by the *up* and *down* scans from the *up/down* map to the *left/right* map. The task of copying the map in a transposed manner can be done efficiently with a single kernel invocation and without violating the coherence.⁶ The basic idea is to first decompose the matrix into smaller blocks that can fully reside in the shared memory. Each block is transposed separately and is written in a coherent manner back to the global memory.

The proposed memory organization results in better performance, but suffers from a different bottleneck. Invoking a single kernel for each row/column in the grid leads to $2M + 2N$ kernel invocations. A kernel invocation consumes a fixed overhead regardless of how many computations are done in that kernel (up to 20 µsec per kernel). To demonstrate the severity of the problem, consider a grid with $3000 \times 3000$ points. The total time for the kernel invocations alone will be approximately $(2 \times 3000 + 2 \times 3000) \times 20$ µsec = 240 msec. This time alone exceeds the total time of our optimized kernel computation by nearly an order of magnitude (see Table I).

A possible remedy is using a kernel that processes a batch of rows rather than a single row at a time. Each batch is composed of a strip of $\Omega$ consecutive rows, such that the total number of kernel invocations is reduced by a factor of $\Omega$, to $\frac{2M+2N}{\Omega}$. For each row in the strip, each thread fetches the previous distance at that vertex from the distance map into the shared memory. The thread then calculates the updated distance at that vertex and writes the result back to the distance map.
All threads are then synchronized. Once all the threads reach the synchronization point, each thread can start working on the calculation of the next row in the strip. Besides the advantage of a reduced number of kernel invocations, this access pattern also leads to higher arithmetic intensity: for a large enough $\Omega$, a single fetch from the distance map per vertex is required (instead of four), since we can keep the computed distances of the previous row in the shared memory and do not need to read them again from global memory as we advance to the next row.

On the other hand, a new problem arises. While communication through shared memory and synchronization between threads of the same thread block are possible, there is no synchronization mechanism between threads of different thread blocks. Since the maximum number of threads in a thread block is limited (512 on the latest NVIDIA GPUs), we are limited to small grids only. Moreover, modern GPUs have several independent multiprocessors working in parallel (16 on the latest NVIDIA GPUs), and since each thread block can be processed by a single multiprocessor, the utilization of the hardware will be poor.

Figure 6 shows a small portion of the distance map that is handled by a single thread block with 32 threads and $\Omega = 8$ rows. Note that thread 0 is guaranteed to produce a valid result only at row 0 since at any other row, the update depends

⁶Refer to the “matrix transpose” example in [CUD] for further details. \ No newline at end of file diff --git a/samples/texts/3747225/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..2764ea35a238a1fe801e8f7c1fa844f56143337e --- /dev/null +++ b/samples/texts/3747225/page_16.md @@ -0,0 +1,7 @@ +Fig. 6. Top: vertex (i, j) is updated based on two neighboring triangles. Bottom: a small portion of the distance map that is handled by a single thread block with 32 threads and Ω = 8 rows.
The light gray area is safe, while the dark gray one might contain errors.

Fig. 7. Top: three overlapping blocks with corresponding “safe zones”. Vertices in the overlapping region are computed twice by two threads belonging to adjacent thread blocks. Bottom: write access pattern. Each thread belongs to exactly one parallelogram, hence writes a single value to the distance map.

on vertices which are located to the left of row 0. Values that come from the left cannot be trusted since they belong to a different thread block. The potential errors on the first column may lead to errors on the second column as well (rows 2 to 7). In general, only the area shown in light gray in Figure 6 is a “safe zone”.

An important observation is that we can still maintain consistent updates if we allow some partial overlap between subsequent thread blocks, such that each thread block updates vertices only in the “safe zone”. The blocks should overlap in such a way that each vertex of the grid belongs to at least one “safe zone”. Figure 7 shows a small portion of the distance map covered by three overlapping blocks. Note that some vertices belong to two “safe zones”. In order to avoid write conflicts, we demand that in such a case, only the block on the right will write the updated distance to the global memory. This write access pattern is illustrated in Figure 7, where each vertex is updated by exactly one thread belonging to one of the parallelograms. \ No newline at end of file diff --git a/samples/texts/3747225/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..c25514204ac9df2117bda6c67daf3af11eb1c6fd --- /dev/null +++ b/samples/texts/3747225/page_17.md @@ -0,0 +1,21 @@ +The amount of redundancy that is introduced due to repetitions depends on the ratio between the number of threads in a single block and Ω. In order to reduce the redundancy, we would like to keep Ω small for a fixed block size.
On the other hand, larger values of Ω decrease the number of kernel invocations and improve the arithmetic intensity. We conclude that a correct balance between the block size and Ω is the key to achieving good hardware utilization.

Another way to reduce the number of kernel invocations and increase the parallelism is to combine the four grid scans into a single scan. Performing several scans in parallel does not change the correctness of the algorithm, since in the worst case it will have to perform four times more iterations in order to converge. When a geodesic in the parametrization domain changes its direction smoothly from a zone affected by the *up* scan to a zone affected by the *down* scan, it must cross the *left* or *right* zones first. For non-smooth geodesics with abrupt angles at some points, this will not hold and additional iterations might be needed. Since this case is rare, we can safely combine the *up* and *down* scans into a single scan, followed by another scan combining the *left* and *right* scans.

## 7. NUMERICAL RESULTS

In order to demonstrate the efficiency of the proposed algorithms, we consider two particular implementations. In the first implementation, the PMM algorithm was implemented in C on an Intel Pentium platform, with the update step written in inline assembly and taking advantage of the SSE2 extensions (Intel SIMD architecture).⁷

The second implementation was developed on an NVIDIA 8800GTX GPU with 768 MB memory using the CUDA environment.⁸ This GPU has 16 independent multiprocessors, each containing 8 processors. During fine-tuning of our code, we ran it on variable grid sizes {64k × 64k : k = 1, ..., 47}. For each grid, we measured performance on several block sizes and different values of Ω, and recorded the configurations which minimized the running time. In most runs, Ω = 16 produced the best results. For relatively small grids (up to 768 × 768), a block size of 64 produced the best results.
The reason is that the use of large blocks leads to a small number of active blocks at any given time, so that not all the GPU multiprocessors are kept active. On large grids, a block size of 256 resulted in the best performance, reducing the amount of wasted computations to approximately 20% (note that even though the maximum block size is 512, configurations with too large blocks may lead to slower performance due to internal synchronization between threads in the same block, or may even result in a configuration invalid for the hardware due to exhaustion of the hardware resources: registers, shared memory, etc.).

A 32-bit (single precision) floating point representation was used in both implementations. Pre-computation of the geometry coefficients was excluded from the time measurements (pre-computation on the GPU took around *9 ms* of preprocessing time on the largest grid with 3008 × 3008 vertices).

### 7.1 Performance benchmarks

Table I presents the execution times of the SSE2 and GPU implementations of the parallel marching algorithm on the sphere surface, with the number of vertices ranging from four thousand to nine million. In all cases, PMM converges in one iteration. Grid construction time (taking less than 10% of a single iteration time) was not measured. For comparison, execution times of the exact and approximate MMP algorithms implemented in [Surazhsky et al. 2005] are presented.

As appears from the table, the GPU outperforms its rivals on all grid sizes, but the gap becomes more pronounced on large grids, where it outperforms the SSE2 implementation by nearly two orders of magnitude, achieving up to

⁷A packaged library is available from http://www.cs.technion.ac.il/~weber/Shared/PMM/PMM_SSE2.rar.

⁸A packaged library is available from http://www.cs.technion.ac.il/~weber/Shared/PMM/PMM.rar.
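The kernel-invocation arithmetic of Section 6.2 is easy to reproduce. The following sketch (a hypothetical helper, using the 20 µs per-launch overhead and the strip width Ω quoted in the text) checks the 240 ms figure and the saving obtained with Ω = 16:

```python
def launch_overhead(M, N, omega=1, per_launch=20e-6):
    # (2M + 2N) / omega kernel invocations, each paying a fixed overhead.
    return (2 * M + 2 * N) / omega * per_launch

# 3000 x 3000 grid: 12,000 single-row launches cost ~240 ms of pure
# overhead; strips of Omega = 16 rows cut this to ~15 ms.
unbatched = launch_overhead(3000, 3000)
batched = launch_overhead(3000, 3000, omega=16)
```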
\ No newline at end of file diff --git a/samples/texts/3747225/page_18.md b/samples/texts/3747225/page_18.md new file mode 100644 index 0000000000000000000000000000000000000000..44ff0f3f4ace85c75a7106b84f98ec7996d5c48b --- /dev/null +++ b/samples/texts/3747225/page_18.md @@ -0,0 +1,19 @@ +
| Vertices | MMP (Exact) | MMP (Approx.) | PMM SSE2 | PMM GPU |
|---|---|---|---|---|
| 4.1 × 10³ | 0.4406 | 0.0982 | 0.0011 | 0.0006 |
| 16.4 × 10³ | 4.4566 | 0.3898 | 0.0042 | 0.0009 |
| 65.5 × 10³ | 54.886 | 1.6503 | 0.0172 | 0.0015 |
| 0.49 × 10⁶ | — | 13.647 | 0.1308 | 0.0045 |
| 0.86 × 10⁶ | — | 25.639 | 0.2306 | 0.0059 |
| 2.56 × 10⁶ | — | — | 0.6791 | 0.0133 |
| 4.19 × 10⁶ | — | — | 1.1301 | 0.0287 |
| 9.04 × 10⁶ | — | — | — | 0.0389 |
Table I. Execution time (in seconds) of different geodesic distance computation algorithms on the sphere surface. For the exact and approximate MMP algorithms, the code by Surazhsky *et al.* was used. Execution time of PMM is given for the single iteration required for algorithm convergence.
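As a sanity check, the headline figures quoted in the surrounding text can be recovered from Table I (times in seconds, copied from the table; the bounds are deliberately loose):

```python
# Largest grid: 9.04e6 vertices in 0.0389 s on the GPU, i.e. roughly
# 230 million distance computations per second.
gpu_throughput = 9.04e6 / 0.0389

# Largest grid measured on both platforms (4.19e6 vertices): the GPU is
# about 39x faster than the SSE2 implementation.
speedup = 1.1301 / 0.0287
```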
| Grid step | MMP Approx. mean abs err | MMP Approx. mean rel err | MMP Approx. max abs err | PMM mean abs err | PMM mean rel err | PMM max abs err |
|---|---|---|---|---|---|---|
| 1.56 × 10⁻² | 2.47 × 10⁻³ | 1.79 × 10⁻³ | 9.99 × 10⁻³ | 7.11 × 10⁻³ | 6.49 × 10⁻³ | 1.26 × 10⁻² |
| 7.81 × 10⁻³ | 1.17 × 10⁻³ | 8.52 × 10⁻⁴ | 4.88 × 10⁻³ | 4.67 × 10⁻³ | 4.34 × 10⁻³ | 8.15 × 10⁻³ |
| 3.91 × 10⁻³ | 5.73 × 10⁻⁴ | 4.16 × 10⁻⁴ | 2.41 × 10⁻³ | 2.91 × 10⁻³ | 2.74 × 10⁻³ | 5.05 × 10⁻³ |
| 1.42 × 10⁻³ | 2.05 × 10⁻⁴ | 1.49 × 10⁻⁴ | 8.71 × 10⁻⁴ | 1.38 × 10⁻³ | 1.31 × 10⁻³ | 2.37 × 10⁻³ |
| 1.08 × 10⁻³ | 1.55 × 10⁻⁴ | 1.13 × 10⁻⁴ | 6.60 × 10⁻⁴ | 1.08 × 10⁻³ | 1.03 × 10⁻³ | 1.86 × 10⁻³ |
| 6.25 × 10⁻⁴ | — | — | — | 7.23 × 10⁻⁴ | 6.92 × 10⁻⁴ | 1.24 × 10⁻³ |
| 4.88 × 10⁻⁴ | — | — | — | 5.92 × 10⁻⁴ | 5.68 × 10⁻⁴ | 1.01 × 10⁻³ |
| 3.22 × 10⁻⁴ | — | — | — | 4.36 × 10⁻⁴ | 4.16 × 10⁻⁴ | 7.38 × 10⁻⁴ |
Table II. Accuracy of the approximate MMP and PMM on the sphere surface, measured in terms of the mean absolute ($L_1$) error, maximum absolute ($L_\infty$) error, and mean relative error as a function of the grid sampling step. In both cases, the error depends approximately linearly on the sampling step, in accordance with the theoretical first-order accuracy.

240 million distance computations per second. For the same grid size, the SSE2 and the GPU PMM outperform the state-of-the-art approximate MMP algorithm by three and four orders of magnitude, respectively.

The data transfer rates between the CPU and the GPU are limited by the bus bandwidth (theoretical 4 GB/sec, observed 2.6 GB/sec). For example, for the largest grid with nine million vertices, the download time was 53 ms (4 floats per vertex) and the upload time was 17 ms (one float per vertex). In most typical scenarios, the geometry image is transferred to the GPU only once and several distance maps are computed, making this preprocessing time negligible. Moreover, modern GPUs are capable of executing kernels in an asynchronous manner, leading to better hiding of the data transfer overheads.

Table II presents the accuracy of different geodesic distance computation algorithms, quantified in terms of the mean absolute ($L_1$) error, maximum absolute ($L_\infty$) error, and mean relative error for different grid sampling steps. This comparison gives a fairly conservative bound on the accuracy of PMM, as the latter depends on the specific surface parametrization. In Figure 8, the complexity is presented as a function of the algorithm accuracy and the grid size. The SSE2 PMM achieves the accuracy of the approximate MMP algorithm at less than 10% of its complexity, whereas the GPU implementation requires about 0.3% of its complexity.

Figure 9 presents the number of distances per second computed by the GPU.
Best utilization of the computational units is achieved for meshes exceeding five million vertices, where the rate reaches about 240 million distances per second.

For the largest grid containing 9.04 million vertices, the peak GPU memory consumption is around 364 megabytes.

ACM Transactions on Graphics, Vol. V, No. N, Month 20YY. \ No newline at end of file diff --git a/samples/texts/3747225/page_19.md new file mode 100644 index 0000000000000000000000000000000000000000..d24f790e63bb89940a10b6925caf01095354f1bd --- /dev/null +++ b/samples/texts/3747225/page_19.md @@ -0,0 +1,15 @@ +Fig. 8. Left: execution time vs. accuracy of different geodesic distance computation algorithms: GPU and SSE2 implementations of PMM (solid blue and dashed green lines, respectively), and the approximate MMP algorithm (dash-dotted red). Right: execution time vs. grid size; same legend, with the addition of the approximate MMP (dash-dotted red) and exact MMP (dotted cyan) algorithms.

### 7.2 Convergence

The dependence of the distance map accuracy on the number of iterations $N_{iter}$ is visualized in Figure 10, which shows the distance map computed from a single point source on the “maze” surface with complicated spiral characteristics. As appears from the figure, the algorithm converges in six iterations, achieving a mean absolute error of 0.0024 of the shape diameter, and a mean relative error of 0.0058.

While multiple iterations are required to compute a faithful distance map on the “maze” surface, it is important to stress that, in our practice, very few iterations are sufficient to obtain an accurate distance map on most surfaces.

### 7.3 Geodesic paths, offset curves, and Voronoi diagrams

Figure 11 shows several computational geometric operations requiring the knowledge of a distance map on a surface. For this visualization, a face surface from the Notre Dame University database was used [Chang et al. 2003].
The surface contained 21,775 vertices and 42,957 faces. In the first two examples, a distance map was computed from a point source located at the tip of the nose. Equi-distant contours were computed using the marching triangle technique in the parametrization domain and then projected back onto the surface. Minimum geodesic paths were computed by backtracking the curve from some starting point along the gradient of the distance map $t$ in the parametrization domain. Formally, geodesic computation can be thought of as the solution of the ordinary differential equation

$$ \dot{\gamma} = -\mathbf{G}^{-1}\nabla_{\mathbf{u}}t, \quad (31) $$

where $\gamma(s)$ is the geodesic path in the parametrization domain and $\Gamma(s) = \mathbf{x}(\gamma(s))$ is the geodesic on the surface. A first-order integration technique was used to compute $\gamma(s)$. \ No newline at end of file diff --git a/samples/texts/3747225/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..904dd0d665a39eeb1094bd2714d05b4eaa93ca79 --- /dev/null +++ b/samples/texts/3747225/page_2.md @@ -0,0 +1,7 @@ +algorithm is constructed similarly to Dijkstra's algorithm for finding shortest paths in graphs. It maintains a set of fixed vertices $S$, for which the time of arrival has already been computed, and a priority queue $Q$ of all other vertices sorted by their times of arrival. The basic operation of the fast marching algorithm is the *update* step, which computes the time of arrival of the wavefront at a grid point based on the times of arrival at its neighboring points.

By construction, the updated value cannot be smaller than the values of the supporting vertices. This monotonicity property ensures that the solution always propagates outwards by fixing the vertex with the smallest $t$. The latter implies that the values of grid points in $S$ are never recomputed.
Since the *update step* has constant complexity, the overall complexity of the fast marching algorithm is determined by the procedure that finds the smallest $t$ in the priority queue $Q$. A heap-based priority queue allows this task to be performed in $\mathcal{O}(\log n)$, where $n$ is the number of grid vertices. Since each vertex is removed from $Q$ and inserted into $S$ only once, the overall complexity is $\mathcal{O}(n \log n)$.

Over the last decade, the fast marching algorithm was generalized to arbitrary triangulated surfaces [Kimmel and Sethian 1998], unstructured meshes [Sethian and Vladimirsky 2000], implicit unorganized surfaces [Mémoli and Sapiro 2001], and parametric surfaces [Spira and Kimmel 2004]. Higher-order versions of fast marching were also proposed [Sethian and Vladimirsky 2000]. Besides fast marching, there exist other families of numerical algorithms for approximate and exact computation of geodesic distances on surfaces, among which the most notable one is the Mitchell–Mount–Papadimitriou (MMP) algorithm [1987], whose most recent approximate implementation by Surazhsky et al. [2005] appears to be the fastest distance computation code available in the public domain. Another recent algorithm is presented in [Jeong and Whitaker 2007].

In this paper, we explore the problem of geodesic distance map approximation on regularly sampled parametric surfaces (often referred to as geometry images), a representation becoming increasingly popular as an alternative to unordered triangular meshes [Gu et al. 2002]. The paper is organized as follows. In Section 2, we formulate the eikonal equation on parametric surfaces. Section 3 is dedicated to the update step. We show a compact expression in matrix-vector form for a first-order update step on geometry images based on the planar wavefront model. We show that the scheme is numerically stable, which allows its use with low-precision arithmetic.
We also study the update step based on the spherical wavefront model proposed by Novotni and Klein [2002] and indicate its numerical difficulties. Section 4 presents a raster scan algorithm for approximate distance map computation on geometry images. The proposed algorithm can be thought of as a generalization of Danielsson's raster scan method [1980] to geometry images, or as a raster-scan version of the parametric fast marching algorithm [Spira and Kimmel 2004]. We show that the raster scan algorithm converges with a bounded number of iterations, which enables its use for geodesic distance map computation. In Section 5, we discuss two parallel implementations of the raster scan algorithm on a SIMD processor and a GPU, which we refer to as the *parallel marching method* or PMM for short. Graphics hardware has been previously used for computation of distance maps and Voronoi diagrams on the plane or in the three-dimensional Euclidean space [Sigg et al. 2003; Hoff et al. 1999; Fischer and Gotsman 2005; Sud et al. 2006]. However, the use of vector processors for computation of geodesic distance maps is a different and significantly more complex problem, which, to the best of our knowledge, has not yet been addressed in the literature, perhaps with the exception of [Carr et al. ]. In Section 6, we discuss the extension of the proposed algorithm to geometry images represented using multiple charts, which is of critical importance in many practical applications. In Section 7, we present numerical tests and performance benchmarks for our algorithms. Parallel marching methods outperform the state-of-the-art distance computation algorithms by up to four orders of magnitude on commodity hardware, making feasible the real-time implementation of many applications in which the complexity of geodesic distance computation has so far been prohibitively high. Section 8 concludes the paper. 
\ No newline at end of file diff --git a/samples/texts/3747225/page_20.md b/samples/texts/3747225/page_20.md new file mode 100644 index 0000000000000000000000000000000000000000..7e3dd91a82ace60539bfac417f14f8125c6dfc75 --- /dev/null +++ b/samples/texts/3747225/page_20.md @@ -0,0 +1,15 @@ +Fig. 9. Number of distance computations per second for the GPU implementation of PMM. Best utilization is achieved for grids exceeding five million vertices.

In the third example, a distance map from 20 random points on the surface was computed and a geodesic Voronoi diagram was found using marching triangles. In the fourth example, the distance map was computed from two disconnected curves and marching triangles were used to trace the geodesic offset curves.

## 7.4 Multi-chart geometry image

We demonstrate the distance computation algorithm for multi-chart geometry images on a “bumped torus” surface (Figure 12), represented using four 100 × 100, 150 × 150, 250 × 250, and 500 × 500 charts, each spanning a fourth of the surface and having the sampling density adjusted to the level of detail. Bilinear interpolation was used to implement the projection operators $\mathcal{P}_{ij}$. Figure 13 depicts the progress of Algorithm 4, in this example terminating after a single pass.

# 8. CONCLUSION

We presented a raster scan-based version of the fast marching algorithm for computation of geodesic distances on geometry images. The structure of the algorithm allowed its efficient parallelization on SIMD processors and GPUs, both of which were considered in this paper. Numerical experiments showed that the proposed method outperforms state-of-the-art methods for first-order distance map approximation by one or two orders of magnitude, thus allowing real-time implementation of applications involving intensive geodesic distance computations. In follow-up work, we intend to demonstrate some of these applications. 
We also showed a generalization of the presented approach to multi-chart geometry images, which is important in many practical applications.

## Acknowledgment

ACM Transactions on Graphics, Vol. V, No. N, Month 20YY. \ No newline at end of file diff --git a/samples/texts/3747225/page_21.md b/samples/texts/3747225/page_21.md new file mode 100644 index 0000000000000000000000000000000000000000..7ccc1c86f374a90e08ea3e137652f78f45966b01 --- /dev/null +++ b/samples/texts/3747225/page_21.md @@ -0,0 +1,18 @@ +Fig. 10. Left-to-right, top-down: progress of PMM on the “maze” surface, initialized with the source point in the middle. Equidistant contours in the parametrization domain are shown. White regions stand for infinite distance. The algorithm converges after six iterations.

The authors thank Tania Surazhsky for the MMP code, Mark Silberstein for insightful discussions, and the anonymous reviewers for their valuable comments. We also wish to thank NVIDIA for their contribution of the GeForce 8800GTX graphics card. This research was partly supported by the United States–Israel Binational Science Foundation grant No. 2004274, by the Ministry of Science grant No. 3-3414, and in part by the Elias Fund for Medical Research.

A. PROOF OF PROPOSITION 1

The algorithm will stop after *N* iterations if the distance map remains unchanged between iterations *N* − 1 and *N*. This, in turn, happens when *N* − 1 iterations are sufficient to cover any characteristic in the parametrization domain. The number of iterations can therefore be bounded by bounding the total variation of the tangential angle of a characteristic. Our proof generalizes [Qian et al. 2006], where a similar result was shown for the Euclidean case.

Let $\Gamma(s)$ be the characteristic curve with respect to the surface, $s$ its arclength, and $\gamma(s) = (u^1(s), u^2(s))^\mathrm{T}$ its parametrization in $\mathbf{U}$. 
Since $\Gamma(s) = \mathbf{x}(\gamma(s))$, using the chain rule we obtain

$$
\begin{aligned}
\dot{\Gamma} &= \mathbf{T}\dot{\gamma} \\
\ddot{\Gamma} &= \mathbf{T}\ddot{\gamma} + \mathbf{r},
\end{aligned}
\quad (32) $$

\ No newline at end of file diff --git a/samples/texts/3747225/page_22.md b/samples/texts/3747225/page_22.md new file mode 100644 index 0000000000000000000000000000000000000000..775e1d73f38c82d04e5f5971febadb7b2c42cf76 --- /dev/null +++ b/samples/texts/3747225/page_22.md @@ -0,0 +1,18 @@ +Fig. 11. Computation of distance maps on a geometry image (21,775 vertices, 42,957 faces). Left-to-right: equi-distant contours; minimum geodesic paths; geodesic Voronoi diagram; offset curves.

Fig. 12. A four-chart geometry image of a torus with bumps. Chart boundaries are plotted as bold black lines. Note that each chart has a different level of detail and, consequently, a different sampling density.

where $\mathbf{r} = (\dot{\gamma}^\mathrm{T}\mathbf{H}^1\dot{\gamma}, \dot{\gamma}^\mathrm{T}\mathbf{H}^2\dot{\gamma}, \dot{\gamma}^\mathrm{T}\mathbf{H}^3\dot{\gamma})^\mathrm{T}$ and $\mathbf{H}^i = \nabla^2_{\mathbf{u}} x^i$ are the Hessian matrices of $x^i$ with respect to the parametrization coordinates $\mathbf{u}$. Since $\Gamma$ is a geodesic, $\ddot{\Gamma}$ is normal to the surface and hence

$$0 = \mathbf{P}_{\mathrm{T}}\ddot{\Gamma} = \mathbf{T}\ddot{\gamma} + \mathbf{P}_{\mathrm{T}}\mathbf{r}, \qquad (33)$$

where $\mathbf{P}_{\mathrm{T}}$ denotes the projection on the tangent space. 
Hence,

$$
\begin{align}
\|\mathbf{T}\ddot{\gamma}\| &= \|\mathbf{P}_{\mathrm{T}}\mathbf{r}\| \le \|\mathbf{r}\| \\
&\le \sqrt{(\lambda_{\min}^{\mathbf{H}^1})^2 + (\lambda_{\min}^{\mathbf{H}^2})^2 + (\lambda_{\min}^{\mathbf{H}^3})^2} \cdot \|\dot{\gamma}\|^2, \tag{34}
\end{align}
$$ \ No newline at end of file diff --git a/samples/texts/3747225/page_23.md b/samples/texts/3747225/page_23.md new file mode 100644 index 0000000000000000000000000000000000000000..2c0d28ad52585bc58d6ae037049ba3c7a6ce6dea --- /dev/null +++ b/samples/texts/3747225/page_23.md @@ -0,0 +1,20 @@ +Fig. 13. Left-to-right, top-down: the progress of the multi-chart fast marching algorithm on the bumped torus geometry image from Figure 12. First, the distance map from the source point is computed on the upper left chart. The distance values on the boundaries are extrapolated onto the neighboring charts, in which the distance maps are computed subsequently in the order of the minimum distance value on the boundary. Color map depicts the level sets of the distance map. In this example, a single pass of the algorithm gives an accurate distance map.

where $\lambda_{\min}^{\mathbf{H}^i}$ is the smallest eigenvalue of the Hessian $\mathbf{H}^i$.

Since $\Gamma$ is a geodesic, $\|\dot{\Gamma}\| = 1$. From (32) we have

$$1 = \|\dot{\Gamma}\|^2 = \dot{\gamma}^{\mathrm{T}} \mathbf{T}^{\mathrm{T}} \mathbf{T} \dot{\gamma} = \dot{\gamma}^{\mathrm{T}} \mathbf{G} \dot{\gamma} \geq \lambda_{\min}^G \cdot \|\dot{\gamma}\|^2. \quad (35)$$

Hence, $1/\lambda_{\max}^G \leq \|\dot{\gamma}\|^2 \leq 1/\lambda_{\min}^G$. 
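The eigenvalue bounds used in (35) are instances of the Rayleigh-quotient inequality $\lambda_{\min}^G \|\mathbf{v}\|^2 \le \mathbf{v}^{\mathrm{T}}\mathbf{G}\mathbf{v} \le \lambda_{\max}^G \|\mathbf{v}\|^2$. A quick numerical sanity check (a numpy sketch of ours, with a random symmetric positive-definite matrix standing in for the metric):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
G = A.T @ A + np.eye(2)             # symmetric positive definite, like a metric
lmin, lmax = np.linalg.eigvalsh(G)  # eigenvalues returned in ascending order
v = rng.standard_normal(2)
q = v @ G @ v                       # the quadratic form v^T G v
ok = lmin * (v @ v) <= q <= lmax * (v @ v)
```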
In a similar manner,

$$\|\mathbf{T}\ddot{\gamma}\|^2 = \ddot{\gamma}^{\mathrm{T}} \mathbf{T}^{\mathrm{T}} \mathbf{T} \ddot{\gamma} = \ddot{\gamma}^{\mathrm{T}} \mathbf{G} \ddot{\gamma} \geq \lambda_{\min}^G \cdot \|\ddot{\gamma}\|^2. \quad (36)$$

Combining the above results yields a bound on the curvature of $\gamma$,

$$
\begin{align}
\kappa &= \frac{\|\ddot{\gamma} \times \dot{\gamma}\|}{\|\dot{\gamma}\|^3} \le \frac{\|\ddot{\gamma}\|}{\|\dot{\gamma}\|^2} \nonumber \\
&\le \frac{\lambda_{\max}^G}{\lambda_{\min}^G} \sqrt{(\lambda_{\min}^{\mathbf{H}^1})^2 + (\lambda_{\min}^{\mathbf{H}^2})^2 + (\lambda_{\min}^{\mathbf{H}^3})^2}. \tag{37}
\end{align}
$$ \ No newline at end of file diff --git a/samples/texts/3747225/page_24.md b/samples/texts/3747225/page_24.md new file mode 100644 index 0000000000000000000000000000000000000000..d288c296dfaeb9fb562751bff0d21f62cce23c5a --- /dev/null +++ b/samples/texts/3747225/page_24.md @@ -0,0 +1,65 @@ +Therefore, the total variation of the tangential angle of $\gamma$ is bounded by

$$ \text{TV}(\phi) = \int_{\gamma} \kappa ds \leq \max \kappa \cdot \int_{\Gamma} ds \leq \max \kappa \cdot D. \quad (38) $$

In the worst case, an iteration is required for every $\pi/2$ in TV($\phi$) to consistently cover the characteristic $\gamma$, which completes the proof.

## REFERENCES

ATI CTM Guide: Technical reference manual. Website: http://ati.amd.com/companyinfo/researcher/documents/ATI_CTM_Guide.pdf.

NVIDIA CUDA: Compute unified device architecture. Website: http://developer.nvidia.com/cuda.

SIGGRAPH 2007 GPGPU COURSE. Website: http://www.gpgpu.org/s2007.

BORNEMANN, F. AND RASCH, C. 2006. Finite-element discretization of static Hamilton-Jacobi equations based on a local variational principle. *Computing and Visualization in Science* **9**, 2, 57–69.

BRONSTEIN, A. M., BRONSTEIN, M. M., AND KIMMEL, R. 2006a. Calculus of non-rigid surfaces for geometry and texture manipulation. *IEEE Trans. Visualization and Comp. Graphics*. to appear.

BRONSTEIN, A. M., BRONSTEIN, M. M., AND KIMMEL, R. 
2006b. Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching. *Proc. National Academy of Sciences* **103**, 5 (January), 1168–1172.

CARR, N., HOBEROCK, J., CRANE, K., AND HART, J. 2006. Rectangular multi-chart geometry images. In *Proc. Symposium on Geometry Processing*.

CHANG, K., BOWYER, K., AND FLYNN, P. 2003. Face recognition using 2D and 3D facial data. *ACM Workshop on Multimodal User Authentication*, 25–32.

DANIELSSON, P.-E. 1980. Euclidean distance mapping. *Computer Graphics and Image Processing* **14**, 227–248.

DUPUIS, P. AND OLIENSIS, J. 1994. Shape from shading: Provably convergent algorithms and uniqueness results. In *Proc. ECCV*. 259–268.

ELAD, A. AND KIMMEL, R. 2001. Bending invariant representations for surfaces. In *Proc. CVPR*. 168–174.

FISCHER, I. AND GOTSMAN, C. 2005. Fast approximation of high order Voronoi diagrams and distance transforms on the GPU. Technical report CS TR-07-05, Harvard University.

FUNKHOUSER, T., KAZHDAN, M., SHILANE, P., MIN, P., KIEFER, W., TAL, A., RUSINKIEWICZ, S., AND DOBKIN, D. 2004. Modeling by example. In *Proc. SIGGRAPH*. 652–663.

GU, X., GORTLER, S., AND HOPPE, H. 2002. Geometry images. *ACM Transactions on Graphics* **21**, 3, 355–361.

HERSHBERGER, J. AND SURI, S. 1999. An optimal algorithm for Euclidean shortest paths in the plane. *SIAM J. Computing* **28**, 6.

HILAGA, M., SHINAGAWA, Y., KOMURA, T., AND KUNII, T. L. 2001. Topology matching for fully automatic similarity estimation of 3D shapes. In *Proc. SIGGRAPH*. 203–212.

HOFF, K., CULVER, T., KEYSER, J., LIN, M., AND MANOCHA, D. 1999. Fast computation of generalized Voronoi diagrams using graphics hardware. In *Proc. ACM SIGGRAPH*. 277–286.

JEONG, W.-K. AND WHITAKER, R. 2007. A fast eikonal equation solver for parallel systems. In *Proc. SIAM Conference on Computational Science and Engineering*.

KATZ, S. AND TAL, A. 2003. Hierarchical mesh decomposition using fuzzy clustering and cuts. 
*ACM Trans. on Graphics* **22**, 3 (July), 954–961.

KIMMEL, R. AND SETHIAN, J. A. 1998. Computing geodesic paths on manifolds. *Proc. of National Academy of Sciences* **95**, 15, 8431–8435.

MÉMOLI, F. AND SAPIRO, G. 2001. Fast computation of weighted distance functions and geodesics on implicit hyper-surfaces. *Journal of Computational Physics* **173**, 1, 764–795.

MÉMOLI, F. AND SAPIRO, G. 2005. A theoretical and computational framework for isometry invariant recognition of point cloud data. *Foundations of Computational Mathematics* **5**, 3, 313–347.

MITCHELL, J. S. B., MOUNT, D. M., AND PAPADIMITRIOU, C. H. 1987. The discrete geodesic problem. *SIAM Journal of Computing* **16**, 4, 647–668.

NOVOTNI, M. AND KLEIN, R. 2002. Computing geodesic distances on triangular meshes. In *Proc. Intl. Conf. in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG’2002)*.

PEYRÉ, G. AND COHEN, L. 2003. Geodesic re-meshing and parameterization using front propagation. In *Proc. VLSM’03*.

PRAUN, E., HOPPE, H., AND FINKELSTEIN, A. 1999. Robust mesh watermarking. In *Proc. SIGGRAPH*. 49–56.

QIAN, J., ZHANG, Y., AND ZHAO, H. 2006. Fast sweeping methods for eikonal equations on triangulated meshes. *SIAM Journal on Numerical Analysis*. to appear.

SETHIAN, J. AND POPOVICI, A. 1999. 3-D traveltime computation using the fast marching method. *Geophysics* **64**, 2, 516–523. \ No newline at end of file diff --git a/samples/texts/3747225/page_25.md b/samples/texts/3747225/page_25.md new file mode 100644 index 0000000000000000000000000000000000000000..bcf3c2d02e0fd6ec715a8551609d28560fa936f2 --- /dev/null +++ b/samples/texts/3747225/page_25.md @@ -0,0 +1,23 @@ +SETHIAN, J. A. 1996. A fast marching level set method for monotonically advancing fronts. *Proc. of National Academy of Sciences* **93**, 4, 1591–1595.

SETHIAN, J. A. AND VLADIMIRSKY, A. 2000. 
Fast methods for the Eikonal and related Hamilton-Jacobi equations on unstructured meshes. *PNAS* **97**, 11, 5699–5703. + +SIGG, C., PEIKERT, R., AND GROSS, M. 2003. Signed distance transform using graphics hardware. In *Proc. IEEE Visualization*. 83–90. + +SLOAN, P.-P. J., ROSE, C. F., AND COHEN, M. F. 2001. Shape by example. In *ACM Symposium on Interactive 3D Graphics*. 133–144. + +SPIRA, A. AND KIMMEL, R. 2004. An efficient solution to the eikonal equation on parametric manifolds. *Interfaces and Free Boundaries* **6**, 4, 315–327. + +SUD, A., GOVINDARAJU, N., GAYLE, R., AND MANOCHA, D. 2006. Interactive 3D distance field computation using linear factorization. In *Proc. ACM Symposium on Interactive 3D Graphics and Games*. 117–124. + +SURAZHSKY, V., SURAZHSKY, T., KIRSANOV, D., GORTLER, S., AND HOPPE, H. 2005. Fast exact and approximate geodesics on meshes. In *Proc. SIGGRAPH*. 553–560. + +TSITSIKLIS, J. N. 1995. Efficient algorithms for globally optimal trajectories. *IEEE Transactions on Automatic Control* **40**, 9, 1528–1538. + +YING, L. AND CANDÈS, E. J. 2006. The phase flow method. *Journal of Computational Physics* **220**, 184–215. + +ZHAO, H. 2004. A fast sweeping method for eikonal equations. *Mathematics of computation* **74**, 250, 603–627. + +ZHOU, K., SNYDER, J., GUO, B., AND SHUM, H.-Y. 2004. Iso-charts: Stretch-driven mesh parameterization using spectral analysis. In *Symposium on Geometry Processing*. + +ZIGELMAN, G., KIMMEL, R., AND KIRYATI, N. 2002. Texture mapping using surface flattening via multi-dimensional scaling. *IEEE Trans. Visualization and computer graphics* **9**, 2, 198–207. \ No newline at end of file diff --git a/samples/texts/3747225/page_3.md b/samples/texts/3747225/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..52ab89a9a454de54f5207588e4eea05920198346 --- /dev/null +++ b/samples/texts/3747225/page_3.md @@ -0,0 +1,34 @@ +**Algorithm 1:** Fast marching method. 
**Input:** Numerical grid $\mathbf{U}$, set of source points $S \subset \mathbf{U}$ with the corresponding initial values $t(s)$
**Output:** The distance map $t: \mathbf{U} \to \mathbb{R}^+$.

**Initialization**

1   $Q \leftarrow \emptyset$
2   **foreach** point $u \in \mathbf{U} \setminus S$ **do** $t(u) \leftarrow \infty$
3   **foreach** point $u \in S$ **do**
4      $Q \leftarrow Q \cup \mathcal{N}(u)$
5   **end**

**Iteration**

6   **while** $Q \neq \emptyset$ **do**
7      $\mathbf{u} \leftarrow \text{ExtractMin}(Q)$
8      $S \leftarrow S \cup \{\mathbf{u}\}$
9      **foreach** point $v \in \mathcal{N}(\mathbf{u})$ **do** Update$(v)$
10 **end**

Fig. 1. A system of coordinates in the parametrization domain (left) and the corresponding local system of coordinates on the surface (right).

## 2. EIKONAL EQUATION ON GEOMETRY IMAGES

For the largest part of the discussion in this paper, we focus our attention on parametric two-dimensional manifolds, i.e., surfaces that can be represented by a single smooth mapping $\mathbf{x}: \mathbf{U} \to \mathbb{R}^3$, where $\mathbf{U} \subset \mathbb{R}^2$ is a parametrization domain. The topology of $\mathbf{U}$ depends on the topology of the surface. The derivatives

$$\xi_i = \frac{\partial \mathbf{x}}{\partial u^i} \qquad (2)$$

with respect to the parametrization coordinates constitute a local system of coordinates on the surface (Figure 1). Distances on the surface are measured according to the differential arclength element,

$$ds^2 = d\mathbf{u}^{\mathrm{T}} \mathbf{G} \, d\mathbf{u}, \qquad (3)$$

where $d\mathbf{u} = (du^1, du^2)^{\mathrm{T}}$ and $\mathbf{G}$ is a $2 \times 2$ *metric* matrix, whose elements are given by $g_{ij} = \xi_i^{\mathrm{T}} \xi_j$. The local system of coordinates is *orthogonal* if and only if $\mathbf{G}$ is diagonal (note that orthogonality of the coordinate system in the parametrization domain does not imply orthogonality of the coordinate system on the surface). 
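As a concrete illustration of (2)-(3), the metric can be assembled directly from the derivatives $\xi_i$. A small numpy sketch (the cylinder example and the function name are ours, not from the paper):

```python
import numpy as np

def metric(xi1, xi2):
    """2x2 metric matrix G with entries g_ij = xi_i^T xi_j."""
    T = np.column_stack([xi1, xi2])  # columns are the local frame vectors
    return T.T @ T

# Unit cylinder x(u1, u2) = (cos u1, sin u1, u2):
# xi_1 = (-sin u1, cos u1, 0) and xi_2 = (0, 0, 1), hence G = I and the
# parametrization is orthogonal (G is diagonal).
u1 = 0.7
xi1 = np.array([-np.sin(u1), np.cos(u1), 0.0])
xi2 = np.array([0.0, 0.0, 1.0])
G = metric(xi1, xi2)
```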
\ No newline at end of file diff --git a/samples/texts/3747225/page_4.md b/samples/texts/3747225/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..a210a87c5f0eb578b1095b8149a34fa95fe80b71 --- /dev/null +++ b/samples/texts/3747225/page_4.md @@ -0,0 +1,33 @@ +A distance map on the surface is computed by solving the eikonal equation, expressed in our notation as

$$
\|\nabla_G t\|^2 = \nabla_{\mathbf{u}}^{\mathrm{T}} t \, \mathbf{G}(\mathbf{u})^{-1} \nabla_{\mathbf{u}} t = 1 \quad (4)
$$

on a discrete grid obtained by sampling the parametrization domain $\mathbf{U}$. A special case, usually referred to as a *geometry image*, is obtained by discretizing the parametrization domain on a regular Cartesian grid with equal steps, which for convenience are henceforth assumed to be 1 in each direction. The origin of the term stems from the fact that the surface can be represented as three matrices holding the coordinates of the sampled surface $\mathbf{x}(\mathbf{U})$.

In a geometry image, a grid point $\mathbf{u}_0$ can be connected to its neighbors $\mathbf{u}_0 + \mathbf{m}$ according to some grid connectivity. The simplest grid connectivity is based on four neighbors: $\mathbf{m} = (\pm 1, 0)^T$, $(0, \pm 1)^T$. Another possible grid connectivity is the eight-neighbor connectivity, where $\mathbf{m} = (\pm 1, 0)^T$, $(0, \pm 1)^T$, $(\pm 1, \pm 1)^T$, $(\pm 1, \mp 1)^T$. These two grid connectivity patterns create four and eight triangles, respectively, supporting the grid point $\mathbf{u}_0$. Let us examine a triangle created by $\mathbf{x}_0 = \mathbf{x}(\mathbf{u}_0)$, $\mathbf{x}_1 = \mathbf{x}(\mathbf{u}_0 + \mathbf{m}_1)$, and $\mathbf{x}_2 = \mathbf{x}(\mathbf{u}_0 + \mathbf{m}_2)$; without loss of generality we will henceforth assume that $\mathbf{x}_0 = 0$. 
In local coordinates, we can write

$$
\mathbf{x}_i = \mathbf{x}_0 + m_i^1 \xi_1 + m_i^2 \xi_2, \tag{5}
$$

or $\mathbf{X} = \mathbf{T}\mathbf{M}$, where $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2)$, $\mathbf{T} = (\xi_1, \xi_2)$, and $\mathbf{M} = (\mathbf{m}_1, \mathbf{m}_2)$. The matrix $\mathbf{E} = \mathbf{M}^\mathrm{T}\mathbf{G}\mathbf{M}$ describes the geometry of the triangle. If $e_{12} > 0$, the angle $\langle \mathbf{x}_1\mathbf{x}_0\mathbf{x}_2 \rangle$ on the surface is acute.

## 3. UPDATE STEP

The fast marching algorithm can be formulated for parametric surfaces as shown in [Spira and Kimmel 2004]. All computations are performed on the grid in the parametrization domain, though the distances are computed with respect to the surface metric **G**. In the numerical core of this algorithm lies the update step, which, given a grid point $\mathbf{u}_0$ and the times of arrival of its neighbors, computes the time of arrival $t(\mathbf{u}_0)$. Since $\mathbf{u}_0$ is shared by several triangles (the exact number of triangles depends on the grid connectivity), $t(\mathbf{u}_0)$ is computed in each triangle and the smallest value is selected to update the time of arrival at $\mathbf{u}_0$.

Let $\mathbf{u}_0$ be updated from its two neighbors $\mathbf{u}_1 = \mathbf{u}_0 + \mathbf{m}_1$ and $\mathbf{u}_2 = \mathbf{u}_0 + \mathbf{m}_2$, whose times of arrival are $t_1 = t(\mathbf{u}_0 + \mathbf{m}_1)$ and $t_2 = t(\mathbf{u}_0 + \mathbf{m}_2)$. We denote $\mathbf{x}_i = \mathbf{x}(\mathbf{u}_i)$ and assume without loss of generality that $\mathbf{x}_0 = 0$. Our goal is to compute $t_0 = t(\mathbf{u}_0)$ based on $t_1$, $t_2$ and the geometry of the triangle $\mathbf{x}_1\mathbf{x}_0\mathbf{x}_2$. The update of $\mathbf{x}_0$ has to obey the following properties:

(1) Consistency: $t_0 > \max\{t_1, t_2\}$.

(2) Monotonicity: an increase of $t_1$ or $t_2$ increases $t_0$. 
(3) Upwinding: the update has to be accepted only from a triangle containing the characteristic direction (characteristics of the eikonal equation coincide with minimum geodesics on the surface).

(4) Numerical stability: a small perturbation in $t_1$ or $t_2$ results in a bounded perturbation in $t_0$.

## 3.1 Planar wavefront approximation

In the original fast marching algorithm, a vertex is updated by simulating a planar wavefront propagating inside the triangle [Kimmel and Sethian 1998]; the values of the two supporting vertices allow the front direction to be computed. The same update scheme was used in [Spira and Kimmel 2004]. Here, we develop a similar scheme, expressing it more compactly and without the use of trigonometric functions, which allows more efficient computation. We model the wavefront as a planar wave propagating from a virtual planar source described by the equation $\mathbf{n}^{\mathrm{T}}\mathbf{x} + p = 0$, where \ No newline at end of file diff --git a/samples/texts/3747225/page_5.md b/samples/texts/3747225/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..c6fa9af3cf8a2a8b9e744edf8c5ca58f28c4bb8a --- /dev/null +++ b/samples/texts/3747225/page_5.md @@ -0,0 +1,34 @@ +Fig. 2. Update schemes based on the planar (left) and spherical (right) wavefront propagation models.

**n** is the propagation direction (Figure 2). Demanding that the supporting vertices $\mathbf{x}_1, \mathbf{x}_2$ of the triangle lie at distances $t_1$ and $t_2$, respectively, from the source, we obtain

$$ \mathbf{X}^{\mathrm{T}}\mathbf{n} + p \cdot \mathbf{1} = \mathbf{t}, \qquad (6) $$

where $\mathbf{X}$ is a matrix whose columns are $\mathbf{x}_1$ and $\mathbf{x}_2$, $\mathbf{1} = (1, 1)^{\mathrm{T}}$, and $\mathbf{t} = (t_1, t_2)^{\mathrm{T}}$. The wavefront time of arrival at the updated vertex $\mathbf{x}_0$ is given by its distance from the planar source,

$$ t_0 = \mathbf{n}^{\mathrm{T}}\mathbf{x}_0 + p = p. 
\qquad (7) $$

Assuming that the mesh is non-degenerate, $\mathbf{x}_1$ and $\mathbf{x}_2$ are linearly independent, and we can solve (6) for **n**, obtaining

$$ \mathbf{n} = \mathbf{X}(\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}(\mathbf{t} - p \cdot \mathbf{1}). \qquad (8) $$

Invoking the condition $\|\mathbf{n}\| = 1$ yields

$$
\begin{aligned}
1 &= \mathbf{n}^{\mathrm{T}}\mathbf{n} \\
&= (\mathbf{t} - p \cdot \mathbf{1})^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-\mathrm{T}}\mathbf{X}^{\mathrm{T}}\mathbf{X}(\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}(\mathbf{t} - p \cdot \mathbf{1}) \\
&= (\mathbf{t} - p \cdot \mathbf{1})^{\mathrm{T}}(\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}(\mathbf{t} - p \cdot \mathbf{1}) \\
&= p^2 \cdot \mathbf{1}^{\mathrm{T}}\mathbf{Q}\mathbf{1} - 2p \cdot \mathbf{1}^{\mathrm{T}}\mathbf{Q}\mathbf{t} + \mathbf{t}^{\mathrm{T}}\mathbf{Q}\mathbf{t},
\end{aligned}
\qquad (9) $$

where $\mathbf{Q} = (\mathbf{X}^\mathrm{T}\mathbf{X})^{-1} = \mathbf{E}^{-1}$. Hence, $t_0$ can be found as the largest solution of the quadratic equation

$$ t_0^2 \cdot \mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{1} - 2t_0 \cdot \mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{t} + \mathbf{t}^\mathrm{T}\mathbf{Q}\mathbf{t} - 1 = 0 \qquad (10) $$

(the smallest solution corresponds to the opposite propagation direction, where the wavefront arrives at $\mathbf{x}_0$ before it arrives at $\mathbf{x}_1$ and $\mathbf{x}_2$ and therefore has to be discarded). To speed the solution up, the terms $\mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{1}$ and $\mathbf{1}^\mathrm{T}\mathbf{Q}$, which depend only on the grid geometry, are pre-computed.

The consistency condition can be written as $p \cdot \mathbf{1} > \mathbf{X}^\mathrm{T}\mathbf{n} + p \cdot \mathbf{1}$ or simply $\mathbf{X}^\mathrm{T}\mathbf{n} < 0$, which can be interpreted geometrically as a demand that the direction $-\mathbf{n}$ must form acute angles with the triangle edges. 
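Numerically, the planar update amounts to taking the largest root of the scalar quadratic (10). A minimal numpy sketch (the function name and the test triangle are illustrative, not the paper's code):

```python
import numpy as np

def planar_update(E, t):
    """Planar-wavefront arrival time t0: largest root of Eq. (10).

    E is the 2x2 triangle geometry matrix M^T G M and t = (t1, t2) holds
    the arrival times at the two supporting vertices.
    """
    Q = np.linalg.inv(E)                 # Q = E^{-1} = (X^T X)^{-1}
    one = np.ones(2)
    a = one @ Q @ one                    # 1^T Q 1, grid-dependent, cacheable
    b = one @ Q @ t                      # 1^T Q t
    c = t @ Q @ t - 1.0
    return (b + np.sqrt(b * b - a * c)) / a  # largest root of a*t0^2 - 2b*t0 + c = 0

# Right triangle with unit, orthogonal edges (E = I) and t1 = t2 = 0:
# the wavefront is the line through x1 and x2, at distance 1/sqrt(2) from x0.
t0 = planar_update(np.eye(2), np.array([0.0, 0.0]))
```

With $t_1 = t_2 = 0$ the returned value is the distance from $\mathbf{x}_0$ to the line through $\mathbf{x}_1$ and $\mathbf{x}_2$, i.e. $1/\sqrt{2}$ here, which matches the planar-source model.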
In order to impose monotonicity, we demand that

$$ \nabla_t t_0 = \left( \frac{\partial t_0}{\partial t_1}, \frac{\partial t_0}{\partial t_2} \right)^{\mathrm{T}} > 0. \qquad (11) $$ \ No newline at end of file diff --git a/samples/texts/3747225/page_6.md b/samples/texts/3747225/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..426be240c25f049ec9016d1ceb99766c377ea5db --- /dev/null +++ b/samples/texts/3747225/page_6.md @@ -0,0 +1,35 @@ +Differentiating (10) with respect to $\mathbf{t}$, we obtain

$$ t_0 \cdot \nabla_t t_0 \cdot \mathbf{1}^\mathrm{T} \mathbf{Q} \mathbf{1} - \nabla_t t_0 \cdot \mathbf{1}^\mathrm{T} \mathbf{Q} \mathbf{t} - t_0 \cdot \mathbf{Q} \mathbf{1} + \mathbf{Q} \mathbf{t} = 0, \quad (12) $$

from which

$$ \nabla_t t_0 = \frac{\mathbf{Q}(\mathbf{t}-p\cdot\mathbf{1})}{\mathbf{1}^\mathrm{T}\mathbf{Q}(\mathbf{t}-p\cdot\mathbf{1})}. \quad (13) $$

Substituting (8), we can write

$$ \mathbf{Q}(\mathbf{t}-p\cdot\mathbf{1}) = (\mathbf{X}^\mathrm{T}\mathbf{X})^{-1}\mathbf{X}^\mathrm{T}\mathbf{n} = \mathbf{Q}\mathbf{X}^\mathrm{T}\mathbf{n}. \quad (14) $$

Observe that the monotonicity condition $\nabla_t t_0 > 0$ is satisfied when either $\mathbf{Q}\mathbf{X}^\mathrm{T}\mathbf{n} > 0$ or $\mathbf{Q}\mathbf{X}^\mathrm{T}\mathbf{n} < 0$, that is, when both coordinates of $\mathbf{Q}\mathbf{X}^\mathrm{T}\mathbf{n}$ have the same sign. However, since the consistency of the solution requires $\mathbf{X}^\mathrm{T}\mathbf{n}$ to be negative, and $\mathbf{Q}$ is positive definite, $\mathbf{Q}\mathbf{X}^\mathrm{T}\mathbf{n}$ cannot have both coordinates positive. We therefore conclude that the solution has to satisfy $\mathbf{Q}\mathbf{X}^\mathrm{T}\mathbf{n} = \mathbf{Q}(\mathbf{t}-p\cdot\mathbf{1}) < 0$. This yields $\mathbf{1}^\mathrm{T}\mathbf{Q}(\mathbf{t}-p\cdot\mathbf{1}) < 0$. 
The latter condition can be rewritten as

$$ 0 > \mathbf{Q}(\mathbf{t}-p\cdot\mathbf{1}) = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{n}, \quad (15) $$

where the inequality is interpreted coordinate-wise. Observe that the rows of the matrix $(\mathbf{X}^\mathrm{T}\mathbf{X})^{-1}\mathbf{X}^\mathrm{T}$ are orthogonal to $\mathbf{x}_2$ and $\mathbf{x}_1$, respectively, or, in other words, are normal to the triangle edges. This gives the following geometric interpretation of the monotonicity condition: the direction $-\mathbf{n}$ must come from within the triangle. Since the update direction also obeys the consistency condition, any direction coming from within the triangle must form acute angles with the triangle edges, leading to the demand that the angle $\langle\mathbf{x}_1\mathbf{x}_0\mathbf{x}_2\rangle$ is acute (or, equivalently, $e_{12} > 0$).

Consistency and monotonicity conditions should guarantee that the update is performed only from a triangle that contains the characteristic direction, which makes the update scheme upwind [Sethian and Vladimirsky 2000]. However, since **n** is only an approximation of the characteristic direction, it may happen that the conditions are not satisfied although the true characteristic lies inside the triangle. For a sufficiently small triangle, this can happen only if one of the two inner products $\mathbf{n}^\mathrm{T}\mathbf{x}_1$, $\mathbf{n}^\mathrm{T}\mathbf{x}_2$ is sufficiently close to zero. This corresponds to the situation in which $t_0$ can be updated from one of the triangle edges (one-dimensional simplices) $\mathbf{x}_0\mathbf{x}_1$, $\mathbf{x}_0\mathbf{x}_2$. In this case, the simple Dijkstra-type update,

$$ t_0 = \min\{t_1 + \|\mathbf{x}_1\|, t_2 + \|\mathbf{x}_2\|\}, \quad (16) $$

is performed.

In order to ensure that the update formula is numerically stable, we assume that $t_i$ is affected by a small error $\varepsilon$, which, in turn, influences the computed time of arrival $t_0$. 
Using first-order Taylor expansion, we have

$$ \tilde{t}_0 \approx t_0 + \varepsilon \cdot \frac{\partial t_0}{\partial t_i} \le t_0 + \varepsilon \cdot \left( \left| \frac{\partial t_0}{\partial t_1} \right| + \left| \frac{\partial t_0}{\partial t_2} \right| \right). \quad (17) $$

Under the monotonicity condition $\nabla_t t_0 > 0$, we can write

$$ \tilde{t}_0 \approx t_0 + \varepsilon \cdot \mathbf{1}^\mathrm{T} \nabla_t t_0 = t_0 + \varepsilon \cdot \frac{\mathbf{1}^\mathrm{T} \mathbf{Q} (\mathbf{t}-p \cdot \mathbf{1})}{\mathbf{1}^\mathrm{T} \mathbf{Q} (\mathbf{t}-p \cdot \mathbf{1})} = t_0 + \varepsilon. \quad (18) $$

The error in $t_0$ is also bounded in the one-dimensional Dijkstra-type update, which makes the update formula stable.

The planar wavefront update scheme is summarized in Algorithm 2. Note that it is valid only for acute triangulations; when some triangles have obtuse angles ($e_{12} < 0$), they have to be split by adding connections to additional neighbor grid points, as proposed by Spira and Kimmel [2004]. \ No newline at end of file diff --git a/samples/texts/3747225/page_7.md b/samples/texts/3747225/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..6f9380fbda8072753403fb1b212d1cb53fce0784 --- /dev/null +++ b/samples/texts/3747225/page_7.md @@ -0,0 +1,45 @@ +**Algorithm 2:** Planar update scheme for acute triangulation.

1. Set $t_0^{\text{new}} \leftarrow t_0$.

2. **foreach** triangle $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2)$ **do**

3. Solve the quadratic equation (10) for $t_0$.

4. **if** $\mathbf{Q}(\mathbf{t}-t_0 \cdot \mathbf{1}) > 0$ or $t_0 < \max\{t(\mathbf{x}_1), t(\mathbf{x}_2)\}$ **then** compute $t_0$ according to (16).

5. Set $t_0^{\text{new}} \leftarrow \min\{t_0^{\text{new}}, t_0\}$.

6. **end**

## 3.2 Spherical wavefront approximation

A different update scheme was proposed by Novotni and Klein [2002]. 
They update a vertex with its Euclidean distance from a virtual point source, whose coordinates are estimated from the times of arrival at the two supporting vertices. This approach is similar in spirit to the Mitchell–Mount–Papadimitriou algorithm [Mitchell et al. 1987; Surazhsky et al. 2005], and apparently should be more accurate than its planar counterpart for computing distance maps from point sources. Here, we show that this scheme can be inconsistent and numerically unstable.

According to Novotni and Klein, the wavefront is modeled as a spherical (circular) wave propagating from a virtual point source $\mathbf{x}$ (Figure 2, right). Demanding that the supporting vertices $\mathbf{x}_1, \mathbf{x}_2$ of the triangle lie at distances $t_1$ and $t_2$, respectively, from the source, we obtain for $i = 1, 2$

$$t_i^2 = (\mathbf{x}_i - \mathbf{x})^\mathrm{T} (\mathbf{x}_i - \mathbf{x}) = \mathbf{x}_i^\mathrm{T} \mathbf{x}_i - 2\mathbf{x}_i^\mathrm{T} \mathbf{x} + \mathbf{x}^\mathrm{T} \mathbf{x}. \quad (19)$$

The time of arrival of the wavefront at the updated vertex $\mathbf{x}_0$ is given by its distance from the point source,

$$t_0^2 = (\mathbf{x}_0 - \mathbf{x})^\mathrm{T} (\mathbf{x}_0 - \mathbf{x}) = \mathbf{x}^\mathrm{T} \mathbf{x}. \quad (20)$$

Denoting $s_i = t_i^2$, and $\mathbf{q} = (s_1 - \mathbf{x}_1^\mathrm{T}\mathbf{x}_1, s_2 - \mathbf{x}_2^\mathrm{T}\mathbf{x}_2)^\mathrm{T}$, we obtain

$$s_0 \cdot \mathbf{1} - 2\mathbf{X}^\mathrm{T}\mathbf{x} = \mathbf{q}. \quad (21)$$

Assuming the mesh to be non-degenerate,

$$\mathbf{x} = \frac{1}{2}\mathbf{X}(\mathbf{X}^\mathrm{T}\mathbf{X})^{-1}(s_0 \cdot \mathbf{1} - \mathbf{q}) = \frac{1}{2}\mathbf{X}\mathbf{Q}(s_0 \cdot \mathbf{1} - \mathbf{q}).
\quad (22)$$

Plugging the latter result into (20), we have

$$\begin{aligned} s_0 &= \mathbf{x}^\mathrm{T}\mathbf{x} = \frac{1}{4}(s_0 \cdot \mathbf{1} - \mathbf{q})^\mathrm{T}\mathbf{Q}(s_0 \cdot \mathbf{1} - \mathbf{q}) \\ &= \frac{1}{4}(s_0^2 \cdot \mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{1} - 2s_0 \cdot \mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{q} + \mathbf{q}^\mathrm{T}\mathbf{Q}\mathbf{q}). \end{aligned} \quad (23)$$

Consequently, $t_0$ is given as the largest positive solution of the following bi-quadratic equation:

$$t_0^4 \cdot \mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{1} - 2t_0^2 (\mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{q} + 2) + \mathbf{q}^\mathrm{T}\mathbf{Q}\mathbf{q} = 0. \quad (24)$$

In order to enforce consistency, we require

$$\begin{align*} s_0 &> s_i = (\mathbf{x}_i - \mathbf{x})^\mathrm{T} (\mathbf{x}_i - \mathbf{x}) = \mathbf{x}_i^\mathrm{T} \mathbf{x}_i - 2\mathbf{x}_i^\mathrm{T} \mathbf{x} + \mathbf{x}^\mathrm{T} \mathbf{x} \\ &= \mathbf{x}_i^\mathrm{T} (\mathbf{x}_i - 2\mathbf{x}) + s_0, \end{align*} \quad (25)$$ \ No newline at end of file diff --git a/samples/texts/3747225/page_8.md b/samples/texts/3747225/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..3db2c540218e3d711b3d62b3a336353e8a5d40a6 --- /dev/null +++ b/samples/texts/3747225/page_8.md @@ -0,0 +1,40 @@ +Fig. 3. Consistency and monotonicity conditions of the spherical update scheme require that the virtual source lie inside the region shaded in red. Some update directions coming from within the triangle are outside that region.

or, alternatively,

$$
\mathbf{x}_i^\mathrm{T} \left( \mathbf{x} - \frac{1}{2} \mathbf{x}_i \right) > 0. \tag{26}
$$

The geometric interpretation of this condition is that the source point $\mathbf{x}$ lies on the "positive" sides of the two perpendicular bisectors of the edges $\mathbf{x}_0\mathbf{x}_1$ and $\mathbf{x}_0\mathbf{x}_2$.
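To make the derivation concrete, here is a minimal numeric sketch (ours, not the authors' code; the helper name `spherical_update` is illustrative) that solves the bi-quadratic (24) for $s_0 = t_0^2$ and recovers the virtual source from (22), with $\mathbf{Q}$ taken as the inverse Gram matrix of the edge vectors:

```python
import math

def spherical_update(x1, x2, t1, t2):
    """Spherical wavefront update: solve the bi-quadratic (24) for s0 = t0^2
    and recover the virtual source x from (22); Q is the inverse Gram matrix
    of the edge vectors x1, x2 (non-degenerate triangle assumed)."""
    d11 = x1[0]*x1[0] + x1[1]*x1[1]
    d12 = x1[0]*x2[0] + x1[1]*x2[1]
    d22 = x2[0]*x2[0] + x2[1]*x2[1]
    det = d11*d22 - d12*d12
    Q = ((d22/det, -d12/det), (-d12/det, d11/det))
    q = (t1*t1 - d11, t2*t2 - d22)
    a = Q[0][0] + 2*Q[0][1] + Q[1][1]                              # 1^T Q 1
    oneQq = (Q[0][0] + Q[1][0])*q[0] + (Q[0][1] + Q[1][1])*q[1]    # 1^T Q q
    qQq = (q[0]*(Q[0][0]*q[0] + Q[0][1]*q[1])
           + q[1]*(Q[1][0]*q[0] + Q[1][1]*q[1]))                   # q^T Q q
    disc = (oneQq + 2.0)**2 - a*qQq
    s0 = ((oneQq + 2.0) + math.sqrt(disc)) / a     # largest positive root of (24)
    # virtual source, eq. (22): x = (1/2) X Q (s0*1 - q)
    w = (Q[0][0]*(s0 - q[0]) + Q[0][1]*(s0 - q[1]),
         Q[1][0]*(s0 - q[0]) + Q[1][1]*(s0 - q[1]))
    x = (0.5*(x1[0]*w[0] + x2[0]*w[1]), 0.5*(x1[1]*w[0] + x2[1]*w[1]))
    return math.sqrt(s0), x
```

Note that the two circles $\|\mathbf{x}-\mathbf{x}_1\| = t_1$, $\|\mathbf{x}-\mathbf{x}_2\| = t_2$ generally intersect at two admissible source positions; taking the largest root of (24) selects one of them.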
+

To enforce monotonicity, we differentiate (24) with respect to $\mathbf{s} = (s_1, s_2)^\mathrm{T}$,

$$
2s_0 \cdot \nabla_s s_0 \cdot \mathbf{1}^\mathrm{T} \mathbf{Q} \mathbf{1} - 2\nabla_s s_0 \cdot (\mathbf{1}^\mathrm{T} \mathbf{Q} \mathbf{q} + 2) - 2s_0 \cdot \mathbf{Q} \mathbf{1} + 2\mathbf{Q}\mathbf{q} = 0, \quad (27)
$$

whence

$$
\nabla_s s_0 = \frac{\mathbf{Q}(s_0 \cdot \mathbf{1} - \mathbf{q})}{\mathbf{1}^\mathrm{T}\mathbf{Q}(s_0 \cdot \mathbf{1} - \mathbf{q}) - 2}. \quad (28)
$$

Requiring $\nabla_s s_0 > 0$ in conjunction with the consistency condition yields $\mathbf{Q}(s_0 \cdot \mathbf{1} - \mathbf{q}) > 0$, or

$$
\mathbf{Q}\mathbf{X}^{\mathrm{T}}\mathbf{x}>0,
\quad
(29)
$$

which can be interpreted geometrically as a demand that $\mathbf{x}$ lie inside the angle $\langle \mathbf{x}_1\mathbf{x}_0\mathbf{x}_2 \rangle$.

Figure 3 shows that some characteristic directions lying inside the triangle violate consistency or monotonicity. Therefore, the spherical wavefront update scheme is likely to introduce errors that will propagate with the computed front. Also note that unlike its planar counterpart, the spherical wavefront propagation model is not numerically stable. Observe that $\|\nabla_s s_0\| > 1$ for any $\mathbf{x}$, and for $\mathbf{x}$ lying on the edge $\mathbf{x}_1\mathbf{x}_2$ the gradient is infinite, meaning that roundoff errors can potentially explode, invalidating the numerical solution.

These disadvantages of the spherical wavefront scheme become less pronounced for $t \gg \ell$, where $\ell$ is the largest triangle edge length. However, for large values of the arrival times, the spherical and the planar models converge and produce nearly identical solutions. Due to these difficulties, in this paper we use the planar wavefront update scheme.
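For comparison, here is a minimal sketch (ours) of the planar per-triangle update with the consistency and monotonicity safeguards and the Dijkstra-type fallback (16). Since the quadratic equation (10) is not reproduced on this page, the coefficient form below ($t_0^2\,\mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{1} - 2t_0\,\mathbf{1}^\mathrm{T}\mathbf{Q}\mathbf{t} + \mathbf{t}^\mathrm{T}\mathbf{Q}\mathbf{t} - 1 = 0$) is an assumption, the standard planar-wavefront quadratic:

```python
import math

def planar_update(x1, x2, t1, t2):
    """Planar wavefront update: solve a quadratic for t0 (assumed form of (10)),
    accept it only if consistency (t0 > max(t1, t2)) and monotonicity
    (Q(t - t0*1) < 0 coordinate-wise) hold; otherwise fall back to the
    Dijkstra-type update (16)."""
    d11 = x1[0]*x1[0] + x1[1]*x1[1]
    d12 = x1[0]*x2[0] + x1[1]*x2[1]
    d22 = x2[0]*x2[0] + x2[1]*x2[1]
    det = d11*d22 - d12*d12
    Q = ((d22/det, -d12/det), (-d12/det, d11/det))
    a = Q[0][0] + 2*Q[0][1] + Q[1][1]                          # 1^T Q 1
    b = (Q[0][0] + Q[1][0])*t1 + (Q[0][1] + Q[1][1])*t2        # 1^T Q t
    c = (t1*(Q[0][0]*t1 + Q[0][1]*t2)
         + t2*(Q[1][0]*t1 + Q[1][1]*t2)) - 1.0                 # t^T Q t - 1
    disc = b*b - a*c
    if disc >= 0.0:
        t0 = (b + math.sqrt(disc)) / a                         # larger root
        g = (Q[0][0]*(t1 - t0) + Q[0][1]*(t2 - t0),
             Q[1][0]*(t1 - t0) + Q[1][1]*(t2 - t0))
        if t0 > max(t1, t2) and g[0] < 0.0 and g[1] < 0.0:
            return t0
    # Dijkstra-type fallback, eq. (16)
    return min(t1 + math.sqrt(d11), t2 + math.sqrt(d22))
```

For a right triangle with $\mathbf{x}_1 = (1,0)$, $\mathbf{x}_2 = (0,1)$ and $t_1 = t_2 = 0$, the update returns the distance $1/\sqrt{2}$ from the origin to the planar front through the two supporting vertices.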
\ No newline at end of file diff --git a/samples/texts/3747225/page_9.md b/samples/texts/3747225/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..ff3761fc9c191cbac5608ce4eab7e914e5cc29b5 --- /dev/null +++ b/samples/texts/3747225/page_9.md @@ -0,0 +1,42 @@ +4. RASTER SCAN ALGORITHM

One of the disadvantages of the fast marching algorithm is that it is inherently sequential, thus allowing no parallelization. In addition, the order of visiting the grid points depends on the shape of the propagating wavefront and is therefore data-dependent. This results in irregular memory access that is unlikely to utilize the caching system efficiently. These drawbacks motivate the search for alternative grid traversal orders.

In his classical paper, Danielsson [1980] observed that since the geodesics on the Euclidean plane are straight lines, all possible characteristic directions of the eikonal equation fall into one of the four quadrants of a Cartesian grid and can therefore be covered by traversing the grid in four directed raster scans. Danielsson's raster-scan approach (commonly referred to as *fast sweeping*) was adopted in [Zhao 2004] for solving the eikonal equation on weighted Euclidean domains, and by Bornemann and Rasch [2006]; similar ideas date back to Dupuis and Oliensis' studies on shape from shading [Dupuis and Oliensis 1994].

Raster scan traversal has linear complexity in the grid size, and is characterized by regular access to memory, which increases the efficiency of caching. Since the order in which the grid points are visited is independent of the data and is known in advance, one can use the pre-caching mechanisms supported by many modern processors. In addition, unlike its priority queue-based counterpart, the raster scan can be efficiently parallelized, as will be shown in Section 5.
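The raster-scan idea can be illustrated on the plain Euclidean grid. The sketch below (ours, with illustrative names; it is not Algorithm 3) propagates a chamfer *approximation* of Euclidean distance over an eight-neighbor stencil in two directed raster scans, each covering half of the possible characteristic directions:

```python
import math

def chamfer_distance(seeds, h, w):
    """Two directed raster scans over an h x w grid, propagating distance with
    8-neighbor chamfer weights (1 for axis steps, sqrt(2) for diagonal steps).
    This is an approximation of the Euclidean distance transform."""
    INF = float("inf")
    d = [[INF] * w for _ in range(h)]
    for (i, j) in seeds:
        d[i][j] = 0.0
    s2 = math.sqrt(2.0)
    # forward scan (top-left to bottom-right): neighbors above and to the left
    for i in range(h):
        for j in range(w):
            for di, dj, wgt in ((-1, 0, 1.0), (0, -1, 1.0), (-1, -1, s2), (-1, 1, s2)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d[i][j] = min(d[i][j], d[ni][nj] + wgt)
    # backward scan (bottom-right to top-left): the remaining directions
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            for di, dj, wgt in ((1, 0, 1.0), (0, 1, 1.0), (1, 1, s2), (1, -1, s2)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    d[i][j] = min(d[i][j], d[ni][nj] + wgt)
    return d
```

Each scan only reads cells already finalized earlier in that scan, which is exactly the regular, cache-friendly access pattern discussed above; for the curved characteristics of a general geometry image, the scans would have to be iterated.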
+

Here, we use the raster scan order to traverse the Cartesian grid in the surface parametrization domain, as summarized in Algorithm 3. As in the priority queue-based traversal order, all computations are done in the parametrization domain, taking into account the metric on the surface. Since each directed raster scan covers only 90° of the possible characteristic directions, the update of a point on the grid can be done only from the triangles containing that direction. For example, if the eight-neighbor grid connectivity is used, only two triangles formed by three neighbors are absolutely required in the update (Figure 4, first row).

Observe that unlike the Euclidean case, where the characteristics are straight lines, on a general geometry image the characteristics in the parametrization domain are usually curved. This implies that the four raster scans may cover only a part of a characteristic, and have to be repeated several times in order to produce a consistent distance map. As a consequence, the complexity of the raster scan algorithm for geometry images is $O(N_{\text{iter}} \cdot n)$, where $n$ is the grid size, and $N_{\text{iter}}$ is the data-dependent number of iterations. In what follows, we present a bound on the maximum number of iterations required before the algorithm stops.

PROPOSITION 1. Algorithm 3 applied to a geometry image $\mathbf{x}(\mathbf{U})$ will stop after at most

$$
N_{\text{iter}} \leq \left\lceil \frac{2D \lambda_{\max}^{G}}{\pi \lambda_{\min}^{G}} \sqrt{( \lambda_{\min}^{H_1} )^2 + ( \lambda_{\min}^{H_2} )^2 + ( \lambda_{\min}^{H_3} )^2} \right\rceil + 1
$$

iterations, where $D$ is the surface diameter, $\lambda_{\min}^{H_i}$ is the smallest eigenvalue of the Hessian matrix $\mathbf{H}_i = \nabla_{\mathbf{uu}}^2 x^i$ of $x^i$ with respect to the parametrization coordinates $\mathbf{u}$, and $\lambda_{\max}^G / \lambda_{\min}^G$ is the condition number of the metric $\mathbf{G}$.

For proof, see Appendix A.
When the surface is given as a graph of a function $z(x,y)$, the bound can be simplified to

$$
N_{\text{iter}} \leq \left\lceil \frac{2D \lambda_{\max}^{G}}{\pi \lambda_{\min}^{G}} \lambda_{\min}^{\mathbf{H}} \right\rceil + 1, \quad (30)
$$

where $\mathbf{H} = \nabla^2 z$.

The main significance of this bound is that the maximum number of iterations does not depend on the discretization of $\mathbf{U}$ and is a constant regardless of the grid size. Note, however, that the bound depends both on the properties of the surface, expressed in terms of the metric $\mathbf{G}$ and the diameter $D$, and on those of the parametrization, expressed in terms \ No newline at end of file diff --git a/samples/texts/3931795/page_1.md b/samples/texts/3931795/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..e5216fa89dbcb3c0f4688ee6f4c292dd961f3f09 --- /dev/null +++ b/samples/texts/3931795/page_1.md @@ -0,0 +1,32 @@ +AN EXACT RENORMALIZATION FORMULA FOR THE MARYLAND MODEL

ALEXANDER FEDOTOV¹ AND FEDOR SANDOMIRSKIY²,¹

**ABSTRACT.** We discuss the difference Schrödinger equation $\psi_{k+1} + \psi_{k-1} + \lambda \cot(\pi(\omega k + \theta))\psi_k = E\psi_k$, $k \in \mathbb{Z}$, where $\lambda, \omega, \theta$ and $E$ are parameters. We obtain explicit renormalization formulas relating its solutions for large $|k|$ to solutions of the equation with new parameters $\lambda, \omega, \theta$ and $E$ for bounded $|k|$. These formulas are similar to the renormalization formulas from the theory of Gaussian exponential sums.

# 1. INTRODUCTION

We consider the difference Schrödinger equation

$$ \psi_{k+1} + \psi_{k-1} + \lambda \cot(\pi(\omega k + \theta))\psi_k = E\psi_k, \quad k \in \mathbb{Z}, \qquad (1.1) $$

where $\omega \in (0,1) \setminus \mathbb{Q}$, $\theta \in [0,1)$, $\lambda > 0$ and $E \in \mathbb{R}$ are parameters; $E$ is called the spectral parameter.
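As a small numeric illustration of (1.1) (ours, not from the paper): for any two solutions $\psi$ and $\varphi$, the discrete Wronskian $w_k = \psi_{k+1}\varphi_k - \varphi_{k+1}\psi_k$ is independent of $k$, because the potential term enters both recursions identically and cancels. A minimal sketch, with arbitrary illustrative parameter values:

```python
import math

def maryland_next(psi_k, psi_km1, k, lam, omega, theta, E):
    """One step of (1.1): psi_{k+1} = (E - lam*cot(pi(omega*k + theta)))*psi_k - psi_{k-1}."""
    cot = 1.0 / math.tan(math.pi * (omega * k + theta))
    return (E - lam * cot) * psi_k - psi_km1

# illustrative parameter values, not taken from the paper
lam, omega, theta, E = 0.5, (math.sqrt(5.0) - 1.0) / 2.0, 0.123, 0.7
psi, phi = [0.0, 1.0], [1.0, 0.0]      # two solutions with independent initial data
for k in range(1, 15):
    psi.append(maryland_next(psi[k], psi[k - 1], k, lam, omega, theta, E))
    phi.append(maryland_next(phi[k], phi[k - 1], k, lam, omega, theta, E))

# discrete Wronskian: constant along the orbit (here identically 1)
wronskians = [psi[k + 1] * phi[k] - phi[k + 1] * psi[k] for k in range(15)]
```

The irrational frequency keeps $\omega k + \theta$ off the integers, so the cotangent potential stays finite along the orbit.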
+

The Schrödinger operator in $l^2(\mathbb{Z})$ corresponding to (1.1) is referred to as the Maryland model. It is one of the popular models of spectral theory [4, 10]: although it is a non-trivial almost periodic operator, many of its important spectral properties can be described explicitly. There are interesting open problems related to the behavior of solutions of (1.1) for large $|k|$. For example, one can mention the study of the spectrum of the Maryland model for frequencies that are neither well nor badly approximable by rational numbers, e.g. [10], the investigation of the multiscale behavior of its (generalized) eigenfunctions, and the explanation of the time evolution generated by the Maryland model, e.g. [11].

In this paper, for the sake of brevity, we call (1.1) the Maryland equation.

The central result of this paper is a renormalization formula expressing solutions of (1.1) in terms of solutions of the Maryland equation with new parameters $\omega, \theta, \lambda, E$ for smaller $|k|$. This formula is similar to the well-known renormalization formula from the theory of Gaussian exponential sums, see, for example, [9].

2010 Mathematics Subject Classification. 39A45, 82B44 (Primary), 11L03 (Secondary).

Key words and phrases. Maryland model, Gaussian exponential sum, renormalization formulas, monodromy matrix, minimal meromorphic solution.

¹ Department of Mathematical Physics, St. Petersburg State University, Ulianovskaja, 1, St. Petersburg-Petrodvoretz, 198904 Russia, E-mail: fedotov.s@mail.ru

² Chebyshev Laboratory, St. Petersburg State University, 14th Line, 29b, Vasilyevsky Island, St. Petersburg, 199178 Russia, E-mail: sandomirski@yandex.ru

The work of the first author is supported by the Russian Foundation for Basic Research under grant 11-01-00458-a; the work of the second one is supported by the Chebyshev Laboratory (St. Petersburg State University) under RF Government grant 11.G34.31.0026 and by JSC “Gazprom Neft”.
\ No newline at end of file diff --git a/samples/texts/3931795/page_10.md b/samples/texts/3931795/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..0db2e3bed80c2df08f81f9638dd6a0812585c058 --- /dev/null +++ b/samples/texts/3931795/page_10.md @@ -0,0 +1,66 @@ +explicit transformation of the complex Maryland equation coefficients proved in this paper appears to be a very rare phenomenon.

3.2.3. Renormalization formula for the Maryland equation. For $\eta \notin \omega\mathbb{Z}$, Theorem 1.1 immediately follows from Theorems 3.1 and 3.2. As the renormalization formula (1.5) is an equality of two functions analytic in $\eta$, the statement of Theorem 1.1 remains valid also for $\eta \in \omega\mathbb{Z}$.

4. CONSTRUCTION OF THE MINIMAL MEROMORPHIC SOLUTION

In this section, we construct a minimal meromorphic solution of the complex Maryland equation (1.8). First, we describe a solution analytic in $\mathbb{C} \setminus \mathbb{R}$. Next, we check that it can be continued to a meromorphic function. Then, we compute the asymptotics of this function for $\operatorname{Im} z \to \pm\infty$ and show that it is minimal. Finally, we prove Theorem 2.1.

Below, $C$ denotes various positive constants (independent of $z$), and $K_C = \{z \in \mathbb{C} : |\operatorname{Im} z| \ge C|\operatorname{Re} z|\}$.

**4.1. Solution analytic in $\mathbb{C} \setminus \mathbb{R}$.** We begin with a short description of a special function we use in this section.

4.1.1. $\sigma$-function. An analogous function was introduced and systematically studied in diffraction theory [1]. Later it appeared in other domains, e.g. [5] and [3]. Below, we rely on the latter paper.
+

The special function $\sigma$ can be uniquely defined as a meromorphic solution of the difference equation

$$
\sigma(z + \pi\omega) = (1 + e^{-iz})\sigma(z - \pi\omega) \quad (4.1)
$$

which is analytic in the strip $S = \{z \in \mathbb{C} : |\operatorname{Re} z| < \pi(1+\omega)\}$, does not vanish there and admits in $S$ the following uniform asymptotics:

$$
\sigma(z) = 1 + o(1), \quad \text{Im } z \to -\infty, \tag{4.2}
$$

$$
\sigma(z) = e^{-\frac{iz^2}{4\pi\omega} + \frac{i\pi}{12\omega} + \frac{i\pi\omega}{12}} (1 + o(1)), \quad \text{Im } z \to \infty. \quad (4.3)
$$

The asymptotics (4.2) and (4.3) remain uniform in $K_C$ for any fixed $C$. The poles of $\sigma$ are located at the points

$$
z = -( \pi(1 + \omega) + 2\pi\omega k + 2\pi m ), \quad k, m \in \mathbb{N} \cup \{0\}, \qquad (4.4)
$$

and its zeros are described by the formulas

$$
z = \pi(1 + \omega) + 2\pi\omega k + 2\pi m, \quad k, m \in \mathbb{N} \cup \{0\}; \tag{4.5}
$$

the zero at $z = \pi(1 + \omega)$ and the pole at $z = -\pi(1 + \omega)$ are simple. We note that

$$
\mathrm{res}_{z=\pi(1+\omega)} \frac{1}{\sigma(z)} = -\sqrt{\omega} e^{\frac{i\pi}{12\omega} + \frac{i\pi\omega}{12} + \frac{i\pi}{4}}.
$$

The $\sigma$-function solves one more difference equation

$$
\sigma(z + \pi) = (1 + e^{-iz/\omega})\sigma(z - \pi) \quad (4.6)
$$

and satisfies the relations

$$
\sigma(z) = e^{-\frac{i}{4\pi\omega}z^2 + \frac{i\pi}{12\omega} + \frac{i\pi\omega}{12}} / \sigma(-z) \quad \text{and} \quad \overline{\sigma(\bar{z})} = 1 / \sigma(-z). \quad (4.7)
$$ \ No newline at end of file diff --git a/samples/texts/3931795/page_11.md b/samples/texts/3931795/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..5d007514a30400338ac5895b2b67913f8d930444 --- /dev/null +++ b/samples/texts/3931795/page_11.md @@ -0,0 +1,61 @@ +4.1.2. Analytic solution of the complex Maryland equation. Here, we always assume that $\operatorname{Im} z \neq 0$.
Instead of $E$ and $\lambda$, we use the parameters $\eta \in \mathbb{R}$ and $l > 0$, see (1.2). It is convenient to consider $|\eta| < \pi + \omega$. We construct a solution represented by a contour integral, and we begin by describing the contour.

Put

$$
\begin{align*}
\mathbb{C}' = \mathbb{C} \setminus & ((\eta - il - \pi(1+\omega) - \mathbb{R}_+) \cup (-\eta - il + \pi(1+\omega) + \mathbb{R}_+) \cup \\
& \cup (-\eta + il - \pi(1+\omega) - \mathbb{R}_+) \cup (\eta + il + \pi(1+\omega) + \mathbb{R}_+)).
\end{align*}
$$

For $z \in \mathbb{C}$, denote by $D(z)$ the set of rays going in $\mathbb{C}$ to infinity parallel to the vectors corresponding to the complex numbers

$$
\tau = e^{i\alpha}, \quad \alpha \in (-\arg z, -\arg z + \pi); \quad -\pi < \arg z < \pi. \tag{4.8}
$$

We assume that $\gamma = \gamma(z)$ is a curve in $\mathbb{C}'$ and that, first, it goes from $-i\infty$ to $-2il$ along a ray of $D(z)$, then it goes from $-2il$ to $2il$ along $i\mathbb{R}$ and, finally, it goes to $+i\infty$ along one more ray of $D(z)$.

**Proposition 4.1.** If $|\eta| < \pi(1 + \omega)$, the formula

$$
\Upsilon(z) = \sin(\pi z) \sin\left(\frac{\pi}{\omega}z\right) \int_{\gamma(z)} e^{\frac{ipz}{\omega}} \frac{\sigma(p+\eta-il)\sigma(p-\eta+il)}{\sigma(p-\eta-il)\sigma(p+\eta+il)} dp \quad (4.9)
$$

defines a solution of (1.8) analytic in $z \in \mathbb{C} \setminus \mathbb{R}$. This solution is also analytic in $\eta$.

*Proof.* The description of the poles and zeros of the $\sigma$-function implies the analyticity of the integrand in $p \in \mathbb{C}'$. The convergence and analyticity of the integral in (4.9) follow from estimates (4.2) and (4.3) and from the definition of the curve $\gamma(z)$. Let us check that $\Upsilon$ solves (1.8). Denote the contour integral by $X(z)$, and denote the integrand by $e^{\frac{ipz}{\omega}} \hat{X}(p)$.
Equation (1.8) for $\Upsilon$ is equivalent to the equation

$$
\begin{equation}
\begin{split}
& \sin(\pi(z+\omega))X(z+\omega) + \sin(\pi(z-\omega))X(z-\omega)+ \\
& \qquad + 2(\cos\eta \operatorname{ch} l \sin(\pi z) + \sin\eta \operatorname{sh} l \cos(\pi z)) X(z) = 0.
\end{split}
\tag{4.10}
\end{equation}
$$

Assume that $\gamma = \gamma(z)$ goes to $\pm i\infty$ along rays from the set $D(z-\omega) \cap D(z+\omega)$. Then this curve can be used as the integration contour in the representations for each of the functions $X(z)$, $X(z-\omega)$ and $X(z+\omega)$. This allows us to transform (4.10) into the equation

$$
\int_{\gamma + \pi\omega} e^{\frac{i(p+\omega)z}{\omega}} (1 + e^{-i(p+\eta-il)}) (1 + e^{-i(p-\eta+il)}) \hat{X}(p-\pi\omega) dp - \\
- \int_{\gamma - \pi\omega} e^{\frac{i(p+\omega)z}{\omega}} (1 + e^{-i(p-\eta-il)}) (1 + e^{-i(p+\eta+il)}) \hat{X}(p+\pi\omega) dp = 0. \quad (4.11)
$$

Equation (4.1) and the definition of $\hat{X}$ imply that

$$
\hat{X}(p + \pi\omega) = \frac{(1 + e^{-i(p+\eta-il)})(1 + e^{-i(p-\eta+il)})}{(1 + e^{-i(p-\eta-il)})(1 + e^{-i(p+\eta+il)})} \hat{X}(p - \pi\omega).
$$

So, it suffices to check that, in (4.11), one can replace $\gamma \pm \pi\omega$ by $\gamma$. Consider the first integral in (4.11); the second one can be treated similarly. Translate $\gamma + \pi\omega$ to $\gamma$ along the real line. Asymptotics (4.2) and (4.3) imply that, translating the integration contour, we do not break the convergence of the integral. So, we need only to check that, in the course of the translation, the integration contour does not cross
any poles of the integrand. The description of the zeros and poles of $\sigma$ shows that the contour can cross only the poles of $\hat{X}(\cdot - \pi\omega)$ located at $p = \pm(\eta - il) - \pi$. These poles being simple, the expression $(1+e^{-i(p+\eta-il)})(1+e^{-i(p-\eta+il)})\hat{X}(p-\pi\omega)$ has no singularities at $p = \pm(\eta - il) - \pi$. This implies the desired result. $\square$

**Remark 4.1.** Using equation (4.6), one can directly check that $\Upsilon$ solves (2.13). By means of (4.7), one proves that $\Upsilon(\bar{z}) = -\bar{\Upsilon}(z)$.

**4.2. Real $z$.** Here, we prove

**Proposition 4.2.** The solution $\Upsilon$ can be continued to a meromorphic function that may have poles only at $z = \pm(\omega k + m)$, $k, m \in \mathbb{N}$.

*Proof.* If $\mathrm{Im}\,z \neq 0$, (4.9) implies that

$$
\begin{align*}
\Upsilon(z) &= -\frac{1}{4} \int_{\gamma(z)} \left( e^{\frac{i(p+\pi\omega+\pi)z}{\omega}} - e^{\frac{i(p+\pi\omega-\pi)z}{\omega}} - e^{\frac{i(p-\pi\omega+\pi)z}{\omega}} + e^{\frac{i(p-\pi\omega-\pi)z}{\omega}} \right) \hat{X}(p) dp \\
&= -\frac{1}{4} \sum_{s_1,s_2=\pm 1} s_1 s_2 \int_{\gamma(z)+s_1\pi\omega+s_2\pi} e^{\frac{ipz}{\omega}} \hat{X}(p-s_1\pi\omega-s_2\pi) dp \\
&= -\frac{1}{4} \sum_{s_1,s_2=\pm 1} s_1 s_2 \int_{\gamma(z)} e^{\frac{ipz}{\omega}} \hat{X}(p-s_1\pi\omega-s_2\pi) dp + 2\pi i R(z),
\end{align*}
$$

where $e^{ipz/\omega} \hat{X}$ is the integrand in (4.9), and $R(z)$ denotes the sum of the residues that appeared when deforming the integration contour. The function $R$ is entire. To analyse it, we note that, in (4.9), both the integrand and the part of the integration contour situated in $\{|\operatorname{Im} p| \le 2l\}$ are independent of $z$. Furthermore, the poles of the integrand are located on the lines $\operatorname{Im} p = \pm l$. This implies that $R$ is given by one and the same formula both for $\operatorname{Im} z > 0$ and for $\operatorname{Im} z < 0$.
By means of (4.1) and (4.6), the last representation for $\Upsilon$ can be transformed to the form + +$$ \Upsilon(z) = \int_{\gamma(z)} \frac{e^{\frac{ipz}{\omega}} A \hat{X}(p - \pi - \pi\omega)}{(\cos p - \cos(\eta + il))(\cos(p/\omega) - \cos((\eta + il)/\omega))} dp + 2\pi i R(z), $$ + +where $A = \operatorname{sh} l \operatorname{sh}(l/\omega) \sin\eta \sin(\eta/\omega)$. + +Using (4.2) and (4.3), one can easily see that, for any fixed $C > 0$, in $K_C$, the integrand admits the estimates $O(e^{i(z-1-\omega)p/\omega})$ for $p \to -i\infty$ and $O(e^{i(z+1+\omega)p/\omega})$ for $p \to +i\infty$. + +Assume that $|\operatorname{Re} z| < 1+\omega$. Thanks to the last two estimates, we can deform the integration contour to the imaginary axis (both for $\operatorname{Im} z > 0$ and $\operatorname{Im} z < 0$). In the strip $|\operatorname{Re} z| < 1+\omega$, the obtained contour integral converges for all $z$ and defines an analytic function. Therefore, $\Upsilon$ is analytic in the strip $\{z \in \mathbb{C} : |\operatorname{Re} z| < 1+\omega\}$. It can be continued to a meromorphic one directly via equation (1.8). This equation also implies the statement on the poles of $\Upsilon$. $\square$ + +**4.3. Behavior of Υ for Im z → ±∞.** Here, first, we get the asymptotics of Υ for Im z → ±∞, and then, we check that Υ is a minimal meromorphic solution of (1.8). + +**Proposition 4.3.** Fix $C > 0$. 
If $|\eta| < \pi(1 + \omega)$, then, in $K_C$,

$$
\begin{align}
\Upsilon(z) &= e^{(l-i\eta)z/\omega}(a_+ + o(1)) + e^{-(l-i\eta)z/\omega}(a_- + o(1)), && \text{Im } z \to +\infty, \tag{4.12} \\
\Upsilon(z) &= e^{(l+i\eta)z/\omega}(b_+ + o(1)) + e^{-(l+i\eta)z/\omega}(b_- + o(1)), && \text{Im } z \to -\infty, \tag{4.13}
\end{align}
$$ \ No newline at end of file diff --git a/samples/texts/3931795/page_13.md b/samples/texts/3931795/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..6704aafd45867e075dd720e9cc5918397eaa4455 --- /dev/null +++ b/samples/texts/3931795/page_13.md @@ -0,0 +1,43 @@ +where

$$
a_{\pm} = \frac{\pi i \sigma(\pi(1+\omega) \mp 2\eta)\sigma(\pi(1+\omega) \mp 2il)}{2 \sigma(\pi(1+\omega) \mp 2(\eta+il))} \operatorname{res}_{p=\pi(1+\omega)} \frac{1}{\sigma(p)}, \quad (4.14)
$$

$$
b_{\pm}(\eta, l) = -\overline{a_{\pm}(\eta, l)}. \tag{4.15}
$$

**Remark 4.2.** Formulas (4.14) and the description of the zeros of the $\sigma$-function imply that $a_- = b_- = 0$ at $\eta = 0, \omega, 2\omega, \dots$

*Proof.* Assume that $z \in \mathbb{C}_+ \cap K_C$. As $z \notin \mathbb{R}$, we use (4.9). For sufficiently small $\delta > 0$ and all $z \in \mathbb{C}_+ \cap K_C$, $D(z)$ contains rays parallel to the vectors $e^{\pm i\delta}$. Therefore, for all $z \in \mathbb{C}_+ \cap K_C$, we can choose in (4.9) one and the same integration contour $\gamma$.

When being translated to the right along the real line, the integration contour can cross poles of the integrand. These are zeros of the denominator in (4.9), located at the points $\pm(-il+\eta) + \pi(1+\omega) + 2\pi(n+\omega m)$, $n, m = 0, 1, 2, \dots$. Let us translate the contour $\gamma$ to $\gamma + c$, where $c > 0$ is chosen so that, in the course of the translation, the contour crosses the poles at $\pm(-il+\eta) + \pi(1+\omega)$ and that, after the translation, it does not contain any pole of the integrand.
+

The function $\Upsilon$ equals the sum of the term $I(z)$ containing the integral along $\gamma + c$ and of $S(z)$, the sum of the (finitely many) contributions of the residues that appeared when deforming $\gamma$ to $\gamma + c$. One has

$$
S(z) = e^{(l-i\eta)z/\omega}(a_+ + o(1)) + e^{-(l-i\eta)z/\omega}(a_- + o(1)), \quad \text{Im } z \to +\infty, \quad (4.16)
$$

with $a_\pm$ given by (4.14). When deriving the last formula, we take into account the fact that the zero of the $\sigma$-function at $p = \pi(1 + \omega)$ is simple. Let us estimate $I(z)$. Along $\gamma + c$, $|\hat{X}(p)|$ is bounded. Therefore,

$$
|I(z)| \leq \text{Const } e^{\pi(1+\omega)|\text{Im } z|/\omega} \int_{\gamma+c} |e^{ipz/\omega} dp|
$$

$$
= \text{Const}\, e^{\pi(1+\omega)|\text{Im } z|/\omega - c \operatorname{Im} z} \int_{\gamma} |e^{ipz/\omega} dp|.
$$

The last integral converges; as $\operatorname{Im} z$ increases, it grows exponentially. Therefore, if $c$ is sufficiently large, then, as $\operatorname{Im} z \to +\infty$, the term $I(z)$ becomes small with respect to both exponentials in (4.16). This implies (4.12).

The proof of (4.13) is similar to the proof of (4.12), but one translates $\gamma$ to the left. Omitting the details, we note only that, to get (4.15), one has to use (4.7). $\square$

Now, one easily checks the main statement of the section:

**Theorem 4.1.** The solution $\Upsilon$ is a minimal meromorphic solution to (1.8); its asymptotic coefficients are given in (4.14) and (4.15).

Theorem 2.2 is an immediate corollary of this theorem.

*Proof.* By Proposition 4.2, $\Upsilon$ is analytic in $|{\rm Re}\,z| \le \pi\omega$. Consider the coefficients $A_\pm$ in formula (2.10) representing $\psi = \Upsilon$ as a linear combination of the canonical basis solutions $u_\pm$. In view of Section 2.1, $A_\pm(z) = \pm w(\Upsilon(z), u_\mp(z)) / w(u_+(z), u_-(z))$. Assume that $z \in \mathbb{C}_+$ is in the $\omega$-neighborhood of the line $i(l+i\eta)\mathbb{R}$.
Then $e^{\pm(l-i\eta)z/\omega}$ are of order one, and using (4.12), (2.5) and (2.6), we get $A_\pm(z) = a_\pm + o(1)$ as $\mathrm{Im}\,z \to +\infty$. This representation is uniform in $\mathrm{Re}\,z$. As $A_\pm$ are $\omega$-periodic, these representations remain valid and uniform in $\mathbb{C}_+$. One studies the coefficients $B_\pm$ \ No newline at end of file diff --git a/samples/texts/3931795/page_14.md b/samples/texts/3931795/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..7ba852dfe7f8568814a207c0f498570c75f86816 --- /dev/null +++ b/samples/texts/3931795/page_14.md @@ -0,0 +1,29 @@ +in the representation (2.11) for $\psi = \Upsilon$ similarly. This leads to the statement of the theorem. $\square$

**4.4. Construction of the canonical Bloch solutions.** Here, using techniques developed in [3] and [7], we prove Theorem 2.1. The proof is carried out in several steps:

1. Consider equation (3.8) equivalent to (1.8). As $\operatorname{Im} z \to +\infty$, the matrix in this equation takes the form $F(z, \eta, l) = \begin{pmatrix} 2\cos(\eta+il) & -1 \\ 1 & 0 \end{pmatrix} + O(e^{-2\pi\operatorname{Im}z})$. The eigenvalues of the leading term equal $\nu^{\pm 1}$, $\nu = e^{-i(\eta+il)}$. Put $\phi = V^{-1}\psi$, where $V = \begin{pmatrix} 1 & 1 \\ 1/\nu & \nu \end{pmatrix}$, and $\psi$ is a vector solution of (3.8). In a neighborhood of $+i\infty$, $\phi$ solves the equation

$$ \phi(z+\omega) = (D + m(z))\phi(z), \quad D = \begin{pmatrix} \nu & 0 \\ 0 & 1/\nu \end{pmatrix}, \quad m(z) = O(e^{-2\pi\operatorname{Im}z}). \quad (4.17) $$

2. Let $\phi_1(z)$ and $\phi_2(z)$ be the first and the second components of the vector $\phi(z)$. Put $\Phi(z) = \phi_2(z)/\phi_1(z)$. Then

$$ \Phi(z+\omega) = \frac{(1/\nu + m_{22}(z))\Phi(z) + m_{21}(z)}{\nu + m_{11}(z) + m_{12}(z)\Phi(z)}. \quad (4.18) $$

We construct a solution of this equation by means of the technique described in Section 4.1.1 of [7].
Consider the sequence of functions defined by the formulas

$$ \Phi_{n+1}(z+\omega) = \frac{(1/\nu + m_{22}(z))\Phi_n(z) + m_{21}(z)}{\nu + m_{11}(z) + m_{12}(z)\Phi_n(z)}, \quad n \ge 0, \quad \Phi_0(z) \equiv 0. $$

Let $D \subset \mathbb{C}$ be a domain. Repeating the proof of Proposition 4.1 from [7], we show that if $|\nu| > 1$ and $\sup_{z \in D} |m(z)|$ is sufficiently small, then, for all $n \in \mathbb{N}$ and $z \in D$, $|\Phi_n(z)| \le 1$, the sequence $\{\Phi_n\}$ converges uniformly in $z \in D$, and the limit $\Phi$ solves (4.18). As, in a neighborhood of $+i\infty$, $m$ is analytic and 1-periodic and satisfies the estimate in (4.17), we conclude that, in a neighborhood of $+i\infty$, there exists an analytic 1-periodic bounded solution $\Phi$ of equation (4.18). As $\Phi$ is 1-periodic and bounded, it can be represented by a Fourier series of the form $\Phi(z) = \sum_{n=0}^{\infty} q_n e^{2\pi i n z}$. Substituting it into (4.18), one checks that $\Phi(z) = O(e^{-2\pi\operatorname{Im}z})$ as $\operatorname{Im} z \to +\infty$.

3. If $\Phi$ solves (4.18) and $\phi_1$ satisfies the equation

$$ \phi_1(z+\omega) = (\nu + m_{11}(z) + m_{12}(z)\Phi(z))\phi_1(z), \quad (4.19) $$

then the vector with the components $\phi_1(z)$ and $\phi_2(z) = \phi_1(z)\Phi(z)$ solves (4.17).

4. Let $\Phi$ be the function constructed in the second step. To construct a solution of (4.19), we use Lemma 2.3 from [3]. It can be formulated in the following way:

**Lemma 4.1.** Let $g$ be a 1-periodic function analytic in a neighborhood of $+i\infty$ such that $g(i\infty) = 0$. Then the equation $f(z+\omega) - f(z) = g(z)$ with a fixed $0 < \omega < 1$ has a solution analytic in a neighborhood of $+i\infty$ and decreasing as $\operatorname{Im} z \to +\infty$ uniformly in $\{z \in \mathbb{C} : |\operatorname{Re} z| \le C\}$, where $C > 0$ is an arbitrary fixed constant.

Define $A(z) = \nu + m_{11}(z) + m_{12}(z)\Phi(z)$.
The estimates for $\Phi$ and $m$ for $\operatorname{Im} z \to +\infty$ imply that $A(z) = \nu(1+o(1))$. Choose the branch of $B = \ln A$ so that $B = -i(\eta+il) + g$ and $g(z) = o(1)$ as $\operatorname{Im} z \to +\infty$. Let $f$ be the function constructed by means of Lemma 4.1 in terms of $g$. Then $\phi_1(z) = e^{-i(\eta+il)z/\omega+f(z)}$ \ No newline at end of file diff --git a/samples/texts/3931795/page_15.md b/samples/texts/3931795/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..6fdeb065e9c6ee33b9c1961b0928231d3a4f7bb5 --- /dev/null +++ b/samples/texts/3931795/page_15.md @@ -0,0 +1,38 @@ +solves (4.19). Using the observation made at the third step, one constructs in terms of $\phi_1$ a solution of (4.17) analytic in a neighborhood of $+i\infty$ and such that, as $\operatorname{Im} z \to +\infty$, $\phi(z) = e^{-i(\eta+il)z/\omega} \left( \binom{1}{0} + o(1) \right)$ uniformly in $\{z \in \mathbb{C} : |\operatorname{Re} z| \le C\}$, $C > 0$ being a fixed constant.

5. The function $\phi$ is a Bloch solution, i.e., $\phi(z+1) = \alpha(z)\phi(z)$, where $\alpha$ is $\omega$-periodic. Indeed, a direct calculation shows that, for any solution $f$ of the equation $f(z+\omega) - f(z) = g(z)$ with a given 1-periodic function $g$, the difference $f(z+1) - f(z)$ is $\omega$-periodic. This implies that $\alpha(z) = \phi_1(z+1)/\phi_1(z)$ is $\omega$-periodic. As $\Phi$ is 1-periodic, one has $\phi_2(z+1)/\phi_2(z) = \phi_1(z+1)/\phi_1(z)$. This implies the desired result.

6. Fix $C_1 > 0$. Let us show that the asymptotics of $\phi$ is uniform in $K_{C_1}$. Consider the coefficient $\alpha$ from the definition of the Bloch solution $\phi$. As it is $\omega$-periodic, the asymptotics of $\phi$ in $\{z \in \mathbb{C} : |\operatorname{Re}z| \le C\}$ implies that, as $\operatorname{Im} z \to +\infty$, $\alpha(z) = e^{-i(\eta+il)/\omega} + O(e^{-2\pi\operatorname{Im}z/\omega})$ uniformly in $\operatorname{Re} z$. Assume that $z \in K_{C_1}$.
Let $N$ be the integer part of $\operatorname{Re}z$. One has

$$ \phi(z) = \left( \prod_{n=1}^{N} \alpha(z-n) \right) \phi(z-N) = \left( e^{-i(\eta+il)N/\omega} + O\left(\operatorname{Im} z\, e^{-2\pi\operatorname{Im}z/\omega}\right) \right) \phi(z-N). $$

Substituting in this formula the asymptotics for $\phi$ justified for bounded $|\operatorname{Re}z|$, we obtain the required asymptotics.

7. One can easily see that the first component $\psi_1$ of a vector solution $\psi$ of (3.8) satisfies (1.8). Let $\psi = V\phi$, where $\phi$ is the solution of (4.17) constructed in the previous steps. By the result of the first step, $\psi$ solves (3.8). We construct solutions $u_\pm$ of (1.8) by the formulas $u_+(z) = \psi_1(z)$ and $u_-(z) = \overline{u_+(-\bar{z})}$. One can easily check that these solutions have all the properties listed in Theorem 2.1. We omit the elementary calculations.

REFERENCES

[1] V. Babich, M. Lyalinov and V. Grikurov. *Diffraction theory: the Sommerfeld-Malyuzhinets technique*. Oxford, UK: Alpha Science, 2008.

[2] V. Buslaev, A. Fedotov. Bloch solutions of difference equations. *St. Petersburg Math. J.* 7(4):561-594, 1996.

[3] V. Buslaev, A. Fedotov. On the difference equations with periodic coefficients. *Advances in Theor. and Math. Phys.* 5(6):1105-1168, 2001.

[4] H.L. Cycon, R.G. Froese, W. Kirsch, B. Simon. *Schrödinger operators, with application to quantum mechanics and global geometry*. Berlin, Springer-Verlag, 1987.

[5] L. Faddeev, R. Kashaev and A. Volkov. Strongly coupled quantum discrete Liouville theory. I: Algebraic approach and duality. *Commun. Math. Phys.* 219:199-219, 2001.

[6] A. Fedotov. Monodromization method in the theory of almost periodic equations. *Algebra i Analiz* [in Russian] 25(2):203-236, 2013. To appear in English in *St. Petersburg Math. J.*

[7] A. Fedotov and F. Klopp. Strong resonant tunneling, level repulsion and spectral type for one-dimensional adiabatic quasi-periodic Schrödinger operators.
*Annales Scientifiques de l'Ecole Normale Supérieure, 4e série*, 38(6):889-950, 2005.

[8] A. Fedotov and F. Klopp. Pointwise existence of the Lyapunov exponent for a quasiperiodic equation. In: *Mathematical results in quantum mechanics*, eds. Ingrid Beltita, Gheorghe Nenciu and Radu Purice. World Sci. Publ., 2008, 55-66.

[9] A. Fedotov, F. Klopp. An exact renormalization formula for Gaussian exponential sums and applications. *American Journal of Mathematics* 134(3):711-748, 2012.

[10] A. Figotin, L. Pastur. *Spectra of random and almost periodic operators*. Springer-Verlag, 1991.

[11] S. Fishman, D.R. Grempel, R.E. Prange. Wave functions at a mobility edge: An example of a singular continuous spectrum. *Phys. Rev. B* 28(12):7370-7372, 1983.

To describe the main result, define the parameters $l > 0$ and $-\pi < \eta < \pi$ in terms of $E$ and $\lambda$ so that $E + i\lambda = 2\cos(\eta + il)$. Then

$$ \lambda = -2\operatorname{sh} l \sin \eta, \quad E = 2\operatorname{ch} l \cos \eta. \qquad (1.2) $$

Put

$$ \mathcal{F}(z, \eta, l) = \begin{pmatrix} 2\operatorname{ch} l \cos \eta + 2\operatorname{sh} l \sin \eta \cot(\pi z) & -1 \\ 1 & 0 \end{pmatrix}. \qquad (1.3) $$

The Maryland equation (1.1) is equivalent to the equation

$$ \Psi_{k+1} = \mathcal{F}(k\omega + \theta, \eta, l)\Psi_k, \quad k \in \mathbb{Z} \qquad (1.4) $$

(write down the equation for the first component of a vector solution of (1.4)!). Let $P_k(\omega, \theta, \eta, l)$ be the matrix solution of (1.4) that is equal to the identity matrix for $k=0$.
Note that + +$$ P_k(\omega, \theta, \eta, l) = \mathcal{F}(\theta + (k-1)\omega, \eta, l) \dots \mathcal{F}(\theta + \omega, \eta, l) \mathcal{F}(\theta, \eta, l), \quad k \ge 1, $$ + +$$ P_k(\omega, \theta, \eta, l) = \mathcal{F}(\theta + k\omega, \eta, l)^{-1} \dots \mathcal{F}(\theta - 2\omega, \eta, l)^{-1} \mathcal{F}(\theta - \omega, \eta, l)^{-1}, \quad k \le -1. $$ + +The main result is described by + +**Theorem 1.1.** For any $N \in \mathbb{Z}$, one has + +$$ P_N(\omega, \theta, \eta, l) = \Psi(\{\theta + N\omega\}, \eta, l) \sigma_2 P_{N_1}(\omega_1, \theta_1, \eta_1, l_1) \sigma_2 \Psi^{-1}(\theta, \eta, l), \quad (1.5) $$ + +where + +$$ N_1 = -[\theta + N\omega], \quad \omega_1 = \left\{ \frac{1}{\omega} \right\}, \quad \theta_1 = \left\{ \frac{\theta}{\omega} \right\}, \quad \eta_1 = \frac{\eta}{\omega} \mod 2\pi, \quad l_1 = \frac{l}{\omega}, \quad (1.6) $$ + +[$x$] and $\{x\}$ denote the integer and the fractional parts of $x \in \mathbb{R}$, + +$$ \Psi(z, \eta, l) = \begin{pmatrix} \psi(z, \eta, l) & \psi(z-1, \eta, l) \\ \psi(z-\omega, \eta, l) & \psi(z-1-\omega, \eta, l) \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad (1.7) $$ + +and $\psi$ is the minimal meromorphic solution of the “complex Maryland equation” + +$$ \psi(z+\omega) + \psi(z-\omega) + \lambda \cot(\pi z)\psi(z) = E\psi(z), \quad z \in \mathbb{C}. \qquad (1.8) $$ + +The importance of the minimal entire solutions, i.e., the solutions having the slowest possible growth for $\operatorname{Im} z \to \pm\infty$, for the study of difference equations with entire periodic coefficients was revealed in [3]. For equation (1.8), the definition of the minimal meromorphic solution is formulated in Section 2. In the same section, we find out that this solution satisfies one more complex Maryland equation with new parameters. This is one of the key observations leading to the renormalization formula (1.5). 
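Since the renormalization formula rests on the parametrization $E + i\lambda = 2\cos(\eta + il)$, here is a quick numerical sanity check of (1.2), and of the same identity for the renormalized parameters from (1.6); the sample values of $\eta$, $l$, $\omega$ are arbitrary.

```python
import cmath
import math

# Check of (1.2): E + i*lambda = 2*cos(eta + i*l) with
# E = 2*ch(l)*cos(eta) and lambda = -2*sh(l)*sin(eta).
eta, l, omega = 0.7, 0.3, 0.618   # arbitrary sample values

E = 2 * math.cosh(l) * math.cos(eta)
lam = -2 * math.sinh(l) * math.sin(eta)
assert abs((E + 1j * lam) - 2 * cmath.cos(eta + 1j * l)) < 1e-12

# The renormalized parameters of (1.6) satisfy the same relation:
eta1, l1 = (eta / omega) % (2 * math.pi), l / omega
E1 = 2 * math.cosh(l1) * math.cos(eta1)
lam1 = -2 * math.sinh(l1) * math.sin(eta1)
assert abs((E1 + 1j * lam1) - 2 * cmath.cos(eta1 + 1j * l1)) < 1e-12
```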
The minimal solution is constructed in Section 4, where we obtain integral representations for it.

The above renormalization formula for the matrix product is as explicit as the renormalization formula obtained in [9] for the Gaussian exponential sums. These formulas have a similar structure. In (1.5), $N_1 \sim -\omega N$ for large $N$, and, as $0 < \omega < 1$, the analysis of the matrix product $P_N(\omega, \theta, \eta, l)$ with a large number of factors is reduced to the analysis of an analogous product with a smaller one. It is important to note that the factors $\Psi(\dots)$ in the right-hand side of (1.5) have to be controlled only on the interval $[0, 1]$.

As in [9], one can easily show that, after a finite number (of order $\log N$) of renormalizations applied consecutively to the matrix products $P_N(\omega, \theta, \eta, l)$, $P_{N_1}(\omega_1, \theta_1, \eta_1, l_1)$, etc., one can reduce the number of factors to one. In the analysis of the Gaussian sums, the main role was played by quasiclassical effects arising when the frequency $\omega_L = \{1/\omega_{L-1}\}$, $L \ge 1$, $\omega_0 = \omega$, is small. In the case of the Maryland equation, there is an additional effect. It is well known that the product $\omega_0\omega_1 \dots \omega_L$ decreases exponentially as $L$ grows. Therefore, after many renormalizations, one encounters the large parameter $l/(\omega_0\omega_1\dots\omega_L)$. So, one can expect that the analysis of the behavior of $P_N(\omega, \theta, \eta, l)$ for large $N$ can be very effective. We plan to employ this idea in our next publication.

Theorem 1.1 is obtained in Section 3. Its proof is based on ideas of the monodromization method.
This method is a general renormalization approach suggested by V. Buslaev and A. Fedotov for studying difference equations on $\mathbb{R}$ with periodic coefficients. It was developed further in papers of A. Fedotov and F. Klopp; see the review article [6]. In Section 3, we describe the monodromization idea and give a proof of a general renormalization formula for the case of difference equations on $\mathbb{Z}$ with coefficients being restrictions to $\mathbb{Z}$ of functions defined and periodic on $\mathbb{R}$. Note that a similar formula was stated without proof in [8]. Formula (1.5) is a corollary of the general one and of the observation that the minimal meromorphic solution of the complex Maryland equation satisfies one more complex Maryland equation (with new parameters). This observation is equivalent to the fact that *the complex Maryland equation is invariant with respect to monodromization*. The reader will find more details in Section 3.

## 2. MINIMAL SOLUTIONS

In this section, we discuss difference equations on the complex plane only. Equation (1.8) is invariant with respect to multiplication of solutions by $e^{2\pi i z/\omega}$. Therefore, if it has a meromorphic solution, it has meromorphic solutions growing as quickly as desired when $\operatorname{Im} z \to \pm\infty$. To define the minimal meromorphic solution, i.e., the solution having the slowest growth for $\operatorname{Im} z \to \pm\infty$, one has to impose some natural conditions on the set of its poles. To give the precise definition, we need to discuss the set of solutions of (1.8).

### 2.1. Solutions of difference equations.
Let us list well-known elementary properties of the solutions of the equation

$$\psi(z + \omega) + \psi(z - \omega) + v(z)\psi(z) = 0, \quad z \in \mathbb{C}, \qquad (2.1)$$

where $v$ is a given function, and $\omega > 0$ is a given number.

Let $\psi$ and $\tilde{\psi}$ be two solutions to (2.1).
It can be easily seen that the expression

$$w(\psi(z), \tilde{\psi}(z)) = \psi(z)\tilde{\psi}(z - \omega) - \psi(z - \omega)\tilde{\psi}(z) \qquad (2.2)$$

is $\omega$-periodic in $z$. It is called the Wronskian of $\psi$ and $\tilde{\psi}$.

If $w(\psi(z), \tilde{\psi}(z)) \neq 0$ for all $z$, one can show that any other solution $\phi$ admits the representation

$$\phi(z) = a(z)\psi(z) + b(z)\tilde{\psi}(z), \quad z \in \mathbb{C}, \qquad (2.3)$$

with some $\omega$-periodic $a$ and $b$. This implies that the solution space of (2.1) is a two-dimensional module over the ring of $\omega$-periodic functions. Note that (2.3) and the Wronskian definition imply that

$$a(z) = \frac{w(\phi(z), \tilde{\psi}(z))}{w(\psi(z), \tilde{\psi}(z))}, \quad b(z) = \frac{w(\psi(z), \phi(z))}{w(\psi(z), \tilde{\psi}(z))}. \qquad (2.4)$$

**2.2. The simplest solutions to the complex Maryland equation in a neighborhood of $\pm i\infty$.** The periodicity of the potential in the complex Maryland equation allows one to consider $+i\infty$ and $-i\infty$ as two singular points. For $Y \in \mathbb{R}$, we call the half-plane $\mathbb{C}_+(Y) = \{z \in \mathbb{C} : \operatorname{Im} z > Y\}$ a neighborhood of $+i\infty$, and we call $\mathbb{C}_-(Y) = \{z \in \mathbb{C} : \operatorname{Im} z < Y\}$ a neighborhood of $-i\infty$.

The minimal meromorphic solutions of the complex Maryland equation are defined in terms of the solutions having the “simplest” behavior in neighborhoods of $\pm i\infty$.
The latter are described in

**Theorem 2.1.** For sufficiently large $Y > 0$, in $\mathbb{C}_+(Y)$, there exist analytic solutions $u_{\pm}$ to the complex Maryland equation such that

$$u_{\pm}(z) = e^{\pm \frac{(l-i\eta)z}{\omega}} (1 + o(1)), \quad \operatorname{Im} z \to +\infty, \qquad (2.5)$$

uniformly in $z \in K_C = \{z \in \mathbb{C} : |\operatorname{Im} z| \ge C|\operatorname{Re} z|\}$, where $C > 0$ is an arbitrary fixed constant. In the terminology of [2], $u_{\pm}$ are Bloch solutions, i.e., $u_{\pm}(z+1) = \alpha_{\pm}(z)u_{\pm}(z)$ with some $\omega$-periodic factors $\alpha_{\pm}$.

This theorem is proved in Section 4.4.

In a neighborhood of $-i\infty$, one can construct solutions $d_{\pm}$ similar to $u_{\pm}$. It is convenient to define them by the formulas

$$d_{\pm}(z) = \overline{u_{\pm}(\bar{z})}. \qquad (2.6)$$

Representations (2.5) imply that

$$w(u_+, u_-) = e^{l-i\eta} - e^{-l+i\eta} + o(1), \quad \operatorname{Im} z \to +\infty. \qquad (2.7)$$

Therefore, for sufficiently large $Y$, the solutions $u_{\pm}$ form a basis for the space of the solutions defined on $\mathbb{C}_+(Y)$. Similarly, $d_{\pm}$ form a basis for the space of the solutions defined on $\mathbb{C}_-(-Y)$. We call the couples $(u_{\pm})$ and $(d_{\pm})$ the *canonical bases* for neighborhoods of $+i\infty$ and $-i\infty$, respectively.

We define $\alpha_{\pm}(z) = u_{\pm}(z+1)/u_{\pm}(z)$ and $\beta_{\pm}(z) = d_{\pm}(z+1)/d_{\pm}(z)$. The functions $\alpha_{\pm}$ and $\beta_{\pm}$ are $\omega$-periodic. Representations (2.5) and (2.6) imply that

$$\alpha_{\pm}(z) = e^{\pm \frac{l-i\eta}{\omega}} (1 + o(1)), \quad \operatorname{Im} z \to +\infty, \qquad (2.8)$$

$$\beta_{\pm}(z) = e^{\pm \frac{l+i\eta}{\omega}} (1 + o(1)), \quad \operatorname{Im} z \to -\infty. \qquad (2.9)$$

**2.3. Minimal meromorphic solution of the complex Maryland equation.**

Let $\psi$ be a solution of (1.8) analytic in the strip $S_0 = \{z \in \mathbb{C} : |\operatorname{Re} z| \le \omega\}$.
**Remark 2.1.** Equation (1.8) implies that $\psi$ can be continued to a meromorphic function that can have poles only at the points $\pm(n+m\omega)$, $n, m \in \mathbb{N}$. Moreover, the poles located at the points $\pm(1+\omega)$ are simple. For this new function, we keep the old notation $\psi$.

Let $Y$ be chosen as in Theorem 2.1. The solution $\psi$ admits the representations

$$\psi(z) = A_+(z) u_+(z) + A_-(z) u_-(z), \quad z \in \mathbb{C}_+(Y), \qquad (2.10)$$

$$\psi(z) = B_+(z) d_+(z) + B_-(z) d_-(z), \quad z \in \mathbb{C}_-(-Y), \qquad (2.11)$$

with some $\omega$-periodic analytic coefficients $A_{\pm}$ and $B_{\pm}$.

**Definition 2.1.** The solution $\psi$ is called a minimal meromorphic solution to (1.8) if the coefficients $A_{\pm}$ and $B_{\pm}$ are bounded in $\mathbb{C}_+(Y)$ and $\mathbb{C}_-(-Y)$, respectively.

For a minimal meromorphic solution $\psi$, the limits $a_{\pm}$ of $A_{\pm}$ for $\operatorname{Im} z \to +\infty$ and the limits $b_{\pm}$ of $B_{\pm}$ for $\operatorname{Im} z \to -\infty$ exist and are equal to the zeroth Fourier coefficients of $A_{\pm}$ and $B_{\pm}$, respectively. We call $a_{\pm}$ and $b_{\pm}$ the asymptotic coefficients of the minimal solution $\psi$.

In Section 4, we prove

**Theorem 2.2.** For $|\eta| < \pi(1+\omega)$, there exists a minimal meromorphic solution $\psi$ of the complex Maryland equation. It is analytic in $\eta$, and its asymptotic coefficients do not vanish for $\eta \notin \omega\mathbb{Z}$.

**Remark 2.2.** The minimal meromorphic solution described in this theorem can be continued to a meromorphic function of $\eta$ and $l$.
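As a consistency check of (2.5) and (2.7): if $u_\pm$ are replaced by their leading exponentials, the Wronskian (2.2) becomes exactly $z$-independent and equals the limit value in (2.7). A small numerical verification with arbitrary sample parameter values:

```python
import cmath

# Model asymptotics u_pm(z) = exp(+/- (l - i*eta) z / omega). Their Wronskian
# w(f, g)(z) = f(z) g(z - omega) - f(z - omega) g(z) equals
# exp(l - i*eta) - exp(-l + i*eta) for every z, as in (2.7).
eta, l, omega = 0.4, 0.25, 0.618
mu = (l - 1j * eta) / omega

u_plus = lambda z: cmath.exp(mu * z)
u_minus = lambda z: cmath.exp(-mu * z)

def wronskian(f, g, z):
    return f(z) * g(z - omega) - f(z - omega) * g(z)

expected = cmath.exp(l - 1j * eta) - cmath.exp(-l + 1j * eta)
for z in (0.3, 1.0 + 2.0j, -0.7 + 5.0j):
    assert abs(wronskian(u_plus, u_minus, z) - expected) < 1e-12
```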
The term “minimal” is explained by

**Theorem 2.3.** Let $\psi$ be a minimal meromorphic solution, and let its asymptotic coefficients $a_{\pm}$ (or $b_{\pm}$) be non-zero. Then

* any other minimal solution coincides with $\psi$ up to a constant factor;

* if $\phi$ is a minimal solution, and one of its asymptotic coefficients is zero, then $\phi \equiv 0$.

When proving this theorem, we use

**Lemma 2.1.** Let $\psi$ be a minimal solution. Then

$$
\begin{aligned}
w(\psi(z+1), \psi(z)) &= a_+ a_- \left( e^{\frac{l-i\eta}{\omega}} - e^{-\frac{l-i\eta}{\omega}} \right) (e^{l-i\eta} - e^{-l+i\eta}) \\
&= b_+ b_- \left( e^{\frac{l+i\eta}{\omega}} - e^{-\frac{l+i\eta}{\omega}} \right) (e^{l+i\eta} - e^{-l-i\eta}).
\end{aligned}
$$

*Proof.* Remark 2.1 implies that the Wronskian of $\psi$ and $\psi(\cdot+1)$ is analytic in the strip $\{z \in \mathbb{C} : -1 < \operatorname{Re} z < \omega\}$. As the Wronskian is $\omega$-periodic, it is an entire function.

Let us study the behavior of the Wronskian for $\operatorname{Im} z \to +\infty$. Recall that $u_{\pm}$ are Bloch solutions, $u_{\pm}(z+1) = \alpha_{\pm}(z)u_{\pm}(z)$, where $\alpha_{\pm}$ are $\omega$-periodic. By means of (2.10), we get

$$ \psi(z+1) = A_+(z+1)\alpha_+(z)u_+(z) + A_-(z+1)\alpha_-(z)u_-(z). $$

From this formula, representation (2.10) for $\psi$, and the $\omega$-periodicity of $\alpha_{\pm}$ and $A_{\pm}$, we deduce that

$$
\begin{aligned}
w(\psi(z+1), \psi(z)) = w(u_+(z), u_-(z)) \cdot & \\
& (A_+(z+1)A_-(z) \alpha_+(z) - A_-(z+1)A_+(z) \alpha_-(z)).
\end{aligned}
 \quad (2.12) $$

Finally, the asymptotics (2.7), the definition of the asymptotic coefficients, and (2.8) imply that, as $\operatorname{Im} z \to +\infty$, the right-hand side in (2.12) tends to the first expression for the Wronskian given in Lemma 2.1.

One similarly proves that, as $\operatorname{Im} z \to -\infty$, $w(\psi(z+1), \psi(z))$ tends to the second expression for the Wronskian described in this lemma.
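As an aside, the $\omega$-periodicity of the Wronskian used throughout this proof has a discrete counterpart that is easy to test: along a lattice $z_k = z_0 + k\omega$, the recurrence coming from (2.1) conserves $w_k = \psi_k\tilde\psi_{k-1} - \psi_{k-1}\tilde\psi_k$ exactly. The potential below is an arbitrary 1-periodic sample, not the Maryland one.

```python
import math

# Two solutions of psi(z+omega) + psi(z-omega) + v(z) psi(z) = 0 sampled on the
# lattice z_k = z0 + k*omega; the discrete Wronskian is k-independent.
omega, z0 = 0.37, 0.13
v = lambda z: 0.5 * math.cos(2 * math.pi * z)   # sample 1-periodic potential

def solve(a, b, n):
    """psi_{k+1} = -psi_{k-1} - v(z_k) psi_k with psi_0 = a, psi_1 = b."""
    psi = [a, b]
    for k in range(1, n):
        psi.append(-psi[k - 1] - v(z0 + k * omega) * psi[k])
    return psi

n = 25
psi, phi = solve(1.0, 0.0, n), solve(0.0, 1.0, n)
w = [psi[k] * phi[k - 1] - psi[k - 1] * phi[k] for k in range(1, n + 1)]
assert all(abs(wk - w[0]) < 1e-10 for wk in w)
```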
As the Wronskian is an entire periodic function that has finite limits as $\operatorname{Im} z \to \pm\infty$, it is bounded. Being a bounded entire function, the Wronskian is independent of $z$, and

$$ w(\psi(z+1), \psi(z)) = \lim_{\operatorname{Im} z \to -\infty} w(\psi(z+1), \psi(z)) = \lim_{\operatorname{Im} z \to +\infty} w(\psi(z+1), \psi(z)). $$

This leads to the statement of the lemma. □

Let us turn to the proof of Theorem 2.3.

*Proof.* Let $\phi$ be one more minimal meromorphic solution of the complex Maryland equation. Assume that the asymptotic coefficients $a_{\pm}$ of $\psi$ are non-zero (the case of $b_{\pm} \neq 0$ is treated in the same way). The above lemma implies that the solutions $z \to \psi(z)$ and $z \to \psi(z+1)$ form a basis for the space of solutions of the complex Maryland equation. Therefore, $\phi$ admits representation (2.3) with $\tilde{\psi}(z) = \psi(z+1)$. Recall that the coefficients in this representation are described in (2.4). As in the proof of Lemma 2.1, one shows that the Wronskians $w(\phi(z), \tilde{\psi}(z))$ and $w(\psi(z), \phi(z))$ in (2.4) are independent of $z$. As $w(\psi(z+1), \psi(z))$ is also independent of $z$, see Lemma 2.1, to check the first statement of the theorem, it suffices to check that $w(\phi(z), \psi(z))$ vanishes at some $z$. Since $\psi$ and $\phi$ are analytic in $\{z \in \mathbb{C} : |\operatorname{Re} z| \le \omega\}$, the complex Maryland equation implies that both these solutions vanish at $z=0$ (otherwise the pole of $\cot(\pi z)$ at $z = 0$ could not be compensated). Therefore, $w(\phi(z), \psi(z))|_{z=0} = 0$. This completes the proof of the first statement of the theorem.

Let us prove the second one.
Lemma 2.1 shows that all the asymptotic coefficients of $\psi$ are non-zero. By the first statement, $\phi = C\psi$ with a constant $C$. Therefore, the asymptotic coefficients of $\phi$ and $\psi$ are proportional with the same constant $C$. As one of the asymptotic coefficients of $\phi$ is zero, we have $C = 0$. Thus, $\phi = 0$. $\square$

**2.4. Second difference equation for the minimal solutions.** A central property of the minimal solutions is described by

**Theorem 2.4.** Let $\psi$ be a minimal meromorphic solution of the complex Maryland equation. Assume that its asymptotic coefficients are non-zero. Then it solves the equation

$$ \psi(z+1) + \psi(z-1) + \lambda_1 \cot(\pi z / \omega) \psi(z) = E_1 \psi(z), \quad z \in \mathbb{C}, \qquad (2.13) $$

where

$$ \lambda_1 = -2 \sin \eta_1 \operatorname{sh} l_1, \quad E_1 = 2 \cos \eta_1 \operatorname{ch} l_1, \qquad (2.14) $$

and $l_1$ and $\eta_1$ are related to $l$ and $\eta$ by the formulas in (1.6).

**Remark 2.3.** As the minimal solution described in Theorem 2.2 is analytic in $\eta$, it solves (2.13) even if $\eta \in \omega\mathbb{Z}$, i.e., even if its asymptotic coefficients vanish.

*Proof.* Lemma 2.1 implies that $\psi$ and $\tilde{\psi} = \psi(\cdot + 1)$ form a basis for the solution space of the complex Maryland equation. The function $\phi = \psi(\cdot - 1)$ also solves this equation and, therefore, admits representation (2.3) with the periodic coefficients described by (2.4). As $w(\psi(z+1), \psi(z))$ is independent of $z$, see Lemma 2.1, the coefficient $b$ in this representation is identically equal to $-1$. So, to prove the theorem, it suffices to calculate the coefficient

$$ a(z) = \frac{w(\psi(z-1), \psi(z+1))}{w(\psi(z), \psi(z+1))}. \qquad (2.15) $$

The Wronskian in the denominator in this formula is described by Lemma 2.1. Let us discuss the Wronskian in the numerator.
Remark 2.1 implies that $z \to w(\psi(z-1), \psi(z+1))$ is a meromorphic $\omega$-periodic function analytic in the strip $0 < \operatorname{Re} z < \omega$ and that, on the boundary of the strip, it may have poles only at the points $z = 0$ and $z = \omega$. By a reasoning similar to the one from the proof of Lemma 2.1, one shows that, as $\operatorname{Im} z \to +\infty$, the Wronskian tends to

$$a_+a_- \left( e^{-\frac{2(l-i\eta)}{\omega}} - e^{\frac{2(l-i\eta)}{\omega}} \right) (e^{l-i\eta} - e^{-l+i\eta}),$$

and, as $\operatorname{Im} z \to -\infty$, it tends to

$$b_+b_- \left( e^{-\frac{2(l+i\eta)}{\omega}} - e^{\frac{2(l+i\eta)}{\omega}} \right) (e^{l+i\eta} - e^{-l-i\eta}).$$

We see that $a$ is a meromorphic $\omega$-periodic function, it may have poles only at $z \in \omega\mathbb{Z}$, these poles are simple, and $a$ tends to constants as $\operatorname{Im} z \to \pm\infty$. This implies that $a(z) = E_1 - \lambda_1 \cot(\pi z/\omega)$ with some constants $E_1$ and $\lambda_1$.

The above asymptotics for $w(\psi(z-1), \psi(z+1))$ and the asymptotics for $w(\psi(z+1), \psi(z))$ in Lemma 2.1 imply that

$$a(z) \rightarrow \begin{cases} e^{\frac{l-i\eta}{\omega}} + e^{-\frac{l-i\eta}{\omega}}, & \operatorname{Im} z \rightarrow +\infty, \\ e^{\frac{l+i\eta}{\omega}} + e^{-\frac{l+i\eta}{\omega}}, & \operatorname{Im} z \rightarrow -\infty. \end{cases}$$

Therefore, $E_1 = 2\cos(\eta/\omega)\operatorname{ch}(l/\omega)$ and $\lambda_1 = -2\sin(\eta/\omega)\operatorname{sh}(l/\omega)$. This completes the proof of the theorem. $\square$

### 3.
MONODROMIZATION AND RENORMALIZATION FORMULAS

In this section, we first recall, following [6], the basic ideas of the monodromization theory; next, we prove a general renormalization formula for matrix cocycles; then, we describe some corollaries of these constructions for the Maryland equation and, after that, prove Theorem 1.1.

#### 3.1. Monodromization.

**3.1.1. Monodromy matrix.** Consider the matrix solutions of the equation

$$\Psi(x+\omega) = M(x)\Psi(x), \quad x \in \mathbb{R}, \qquad (3.1)$$

where $M$ is a given 1-periodic $SL(2, \mathbb{C})$-valued function and $0 < \omega < 1$ is a fixed number.

For any solution $\Psi$ of equation (3.1), $\det \Psi$ is an $\omega$-periodic function. We call a solution $\Psi$ *fundamental* if $\det \Psi$ is independent of $x$ and does not vanish. Below, we assume that $\Psi$ is a fundamental solution.

A function $\tilde{\Psi}: \mathbb{R} \rightarrow \text{GL}(2, \mathbb{C})$ solves (3.1) if and only if

$$\tilde{\Psi}(x) = \Psi(x) \cdot p(x), \quad \forall x \in \mathbb{R}, \qquad (3.2)$$

where $p$ is an $\omega$-periodic matrix function.

The function $x \rightarrow \Psi(x+1)$ is a solution of (3.1) together with $\Psi$. Therefore,

$$\Psi(x+1) = \Psi(x) \cdot p(x), \quad p(x+\omega) = p(x), \quad \forall x \in \mathbb{R}.$$

The matrix

$$M_1(x) = p^t(\omega x),$$

where $^t$ denotes transposition, is the *monodromy matrix* corresponding to the fundamental solution $\Psi$. Like the matrix $M$ from the input equation, the monodromy matrix is 1-periodic and unimodular.

**3.1.2. Very short introduction to the monodromization theory.** Let $\omega_1$ be the Gauss transform of $\omega$, i.e.,
$\omega_1 = \{\frac{1}{\omega}\}$. Consider the equation

$$ \Psi_1(x + \omega_1) = M_1(x) \Psi_1(x), \quad x \in \mathbb{R}, \tag{3.3} $$

where $M_1$ is the monodromy matrix corresponding to a fundamental solution $\Psi$ of (3.1). We say that (3.3) is the monodromy equation obtained from (3.1) via monodromization.

The monodromy equation (3.3) is similar to the input one: the matrices $M$ and $M_1$ are both unimodular and 1-periodic. Therefore, the *monodromization procedure* can be continued: one can consider the monodromy matrix corresponding to a fundamental solution of (3.3), the corresponding monodromy equation, and so on. As a result, one arrives at an infinite sequence of difference equations similar to the input one. There are deep relationships between these equations (see, for example, Theorem 3.1). The leading idea of the monodromization method is to analyze solutions of the input equation by analyzing properties of the dynamical system that defines the coefficients of each equation in the sequence in terms of the coefficients of the previous one.

**3.1.3. Renormalization of matrix cocycles.** Together with (3.1), consider the family of difference equations on $\mathbb{Z}$

$$ \Psi_{k+1} = M(\omega k + \theta) \Psi_k, \quad k \in \mathbb{Z}, \tag{3.4} $$

where $0 \le \theta < 1$ is the parameter indexing the equations. Let $k \to P_k(M, \omega, \theta)$ be the solution of (3.4) equal to the identity matrix when $k=0$. It is obvious that $P_k(M, \omega, \theta) = M(\omega(k-1)+\theta)\dots M(\omega+\theta)M(\theta)$ when $k>0$, and $P_k(M, \omega, \theta) = M^{-1}(\omega k + \theta)\dots M^{-1}(\theta - 2\omega)M^{-1}(\theta - \omega)$ when $k<0$.

**Theorem 3.1** (on renormalizations of matrix cocycles). Let $\Psi$ be a fundamental solution of (3.1), and let $M_1$ be the corresponding monodromy matrix.
Then, for all $N \in \mathbb{Z}$,

$$ P_N(M, \omega, \theta) = \Psi(\{\theta + N\omega\})\sigma_2 P_{N_1}(M_1, \omega_1, \theta_1)\sigma_2\Psi^{-1}(\theta), \tag{3.5} $$

$$ N_1 = -[\theta + N\omega], \quad \omega_1 = \{1/\omega\}, \quad \theta_1 = \{\theta/\omega\}, \tag{3.6} $$

where $\sigma_2$ is the matrix defined in (1.7).

A renormalization formula similar to (3.5) was stated without proof in [8]. Formula (3.5) relates the solution $P_N(M, \omega, \theta)$ of equation (3.4) to the solution $P_{N_1}(M_1, \omega_1, \theta_1)$ of the equation of the same form but with the matrix $M_1$ and the parameters $\omega_1$ and $\theta_1$ instead of $M$, $\omega$ and $\theta$.

*Proof*. In the case of $N=0$, the statement is obvious. Assume that $N>0$ (the case of $N<0$ is treated similarly). Equation (3.1) implies that

$$ \Psi(\theta + N\omega) = M(\theta + (N-1)\omega) M(\theta + (N-2)\omega) \dots M(\theta) \Psi(\theta) = P_N(M, \omega, \theta) \Psi(\theta). $$

The solution $\Psi$ being fundamental, the matrix $\Psi(\theta)$ is invertible. Therefore,

$$ P_N(M, \omega, \theta) = \Psi(\theta + N\omega)\Psi^{-1}(\theta). \tag{3.7} $$

The definition of the monodromy matrix $M_1$ implies that $\Psi(x) = \Psi(x-1)M_1^t\left(\frac{x-1}{\omega}\right)$.
Using this relation to express $\Psi(\theta + N\omega)$ in terms of $\Psi(\{\theta + N\omega\})$, we get

$$P_N(M, \omega, \theta) = \Psi(\{\theta + N\omega\}) M_1^t \left( \frac{\theta + N\omega - [\theta + N\omega]}{\omega} \right) \cdots \\ \qquad \cdots M_1^t \left( \frac{\theta + N\omega - 2}{\omega} \right) M_1^t \left( \frac{\theta + N\omega - 1}{\omega} \right) \Psi^{-1}(\theta).$$

Taking into account the 1-periodicity of $M_1$ and using (3.6), we arrive at the formula
$P_N(M, \omega, \theta) = \Psi(\{\theta + N\omega\}) M_1^t(\theta_1 + N_1\omega_1) \cdots M_1^t(\theta_1 - 2\omega_1) M_1^t(\theta_1 - \omega_1) \Psi^{-1}(\theta)$.
For any $A \in SL(2, \mathbb{C})$, one has $A^t = \sigma_2 A^{-1}\sigma_2$. Therefore,

$$\begin{aligned} P_N(M, \omega, \theta) &= \Psi(\{\theta + N\omega\}) \sigma_2 M_1^{-1}(\theta_1 + N_1\omega_1) \dots M_1^{-1}(\theta_1 - 2\omega_1) M_1^{-1}(\theta_1 - \omega_1) \sigma_2 \Psi^{-1}(\theta) \\ &= \Psi(\{\theta + N\omega\}) \sigma_2 P_{N_1}(M_1, \omega_1, \theta_1) \sigma_2 \Psi^{-1}(\theta). \end{aligned}$$

This implies the statement of the theorem. $\square$

### 3.2. Monodromization and the Maryland equation.

3.2.1. *Monodromy matrix for the complex Maryland equation.* Let $\psi$ be the minimal meromorphic solution of the complex Maryland equation described in Theorem 2.2. In terms of $\psi$, we construct the matrix $\Psi$ as in (1.7). One has

**Lemma 3.1.** The function $z \to \Psi(z, \eta, l)$ solves the equation

$$\Psi(z + \omega) = \mathcal{F}(z, \eta, l)\Psi(z), \quad z \in \mathbb{C}. \tag{3.8}$$

If $\eta \notin \omega\mathbb{Z}$, the solution $\Psi$ is fundamental.

*Proof.* The first statement is obvious. As $\det \Psi$ equals the Wronskian of $\psi$ and $\psi(\cdot - 1)$, the second statement follows from Lemma 2.1. $\square$

**Remark 3.1.** By Theorem 2.2, the solution $\Psi$ depends analytically on $\eta$.
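The identity $A^t = \sigma_2 A^{-1} \sigma_2$ used in the proof above holds for every unimodular $2\times 2$ matrix; a direct numerical check on a sample $A \in SL(2, \mathbb{C})$:

```python
# sigma_2 conjugation turns the inverse of a unimodular matrix into its transpose.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma2 = [[0, -1j], [1j, 0]]
a, b, c = 1.3 + 0.2j, -0.4j, 2.0 - 1.0j
d = (1 + b * c) / a                       # enforce det A = a*d - b*c = 1
A_inv = [[d, -b], [-c, a]]                # inverse of a matrix with det 1
A_t = [[a, c], [b, d]]

rhs = matmul(matmul(sigma2, A_inv), sigma2)
assert all(abs(A_t[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```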
The definition of the monodromy matrix and Theorem 2.4 imply

**Theorem 3.2.** If $\eta \notin \omega\mathbb{Z}$, the monodromy matrix corresponding to the fundamental solution $\Psi(\cdot, \eta, l)$ equals $\mathcal{F}(\cdot, \eta_1, l_1)$, where $\eta_1$ and $l_1$ are defined by the formulas in (1.6).

3.2.2. **Invariance with respect to monodromization.** Theorem 3.2 means that the matrix Maryland equation (3.8) is invariant with respect to monodromization: after monodromization, it turns into the matrix Maryland equation with new parameters. Like (3.8), the latter is equivalent to a complex Maryland equation (1.8) (the equation with new parameters), and one can say that the complex Maryland equation is invariant with respect to monodromization. Actually, it is this invariance that leads to the renormalization formula (1.5).

In [3], the authors consider difference equations on $\mathbb{C}$ whose coefficients are trigonometric polynomials, and describe equation families invariant with respect to monodromization. One of these families contains an equation related to the famous Almost Mathieu equation (in the same way as the Maryland equation is related to the complex Maryland equation). However, in the general case, the investigation of the transformation of the trigonometric polynomial coefficients under monodromization appears to be a very non-trivial problem. Only for very special trigonometric polynomials is this transformation known to be elementary [8].
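The dynamics driving the iterated monodromizations is the Gauss map $\omega \mapsto \{1/\omega\}$ from (1.6). A small sketch of the exponential decay of the products $\omega_0\omega_1\cdots\omega_L$ mentioned in the introduction, using the golden mean (the slowest-decaying frequency) as a sample value:

```python
import math

# Iterate omega_L = {1/omega_{L-1}} and accumulate the products
# omega_0 * omega_1 * ... * omega_L, which decay exponentially in L.
def frac(x):
    return x - math.floor(x)

omega = (math.sqrt(5.0) - 1.0) / 2.0   # golden mean: frac(1/omega) = omega
prods, p = [], 1.0
for _ in range(20):
    p *= omega
    prods.append(p)
    omega = frac(1.0 / omega)

assert all(b < a for a, b in zip(prods, prods[1:]))   # strictly decreasing
assert prods[-1] < 0.62 ** 20                          # ~ 0.618**20 decay
```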
The

# An Energy Efficient Adaptive HELLO Algorithm for Mobile Ad Hoc Networks

Dancing He, Nathalie Mitton, David Simplot-Ryl

To cite this version:

Dancing He, Nathalie Mitton, David Simplot-Ryl. An Energy Efficient Adaptive HELLO Algorithm for Mobile Ad Hoc Networks. The 16th ACM/IEEE International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWIM), Nov 2013, Barcelona, Spain. hal-00850350

HAL Id: hal-00850350
https://hal.inria.fr/hal-00850350

Submitted on 2 Dec 2013

**HAL** is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
\ No newline at end of file diff --git a/samples/texts/4157293/page_2.md b/samples/texts/4157293/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..bff690bbc42d85355910659cc71c909f18be540f --- /dev/null +++ b/samples/texts/4157293/page_2.md @@ -0,0 +1,39 @@ +# An Energy Efficient Adaptive HELLO Algorithm for Mobile Ad Hoc Networks

Dancing He
Centro de Electronica Industrial
Universidad Politecnica de Madrid
dancing.he@upm.es

Nathalie Mitton and David Simplot-Ryl
Inria Lille - Nord Europe
{firstname.lastname}@inria.fr

## ABSTRACT

The HELLO protocol, or neighborhood discovery, is essential in wireless ad hoc networks: it defines the rules by which nodes announce their presence. In the presence of node mobility, no fixed optimal HELLO frequency and optimal transmission range exist that maintain accurate neighborhood tables while reducing energy consumption and bandwidth occupation. A Turnover based Frequency and transmission Power Adaptation algorithm (TFPA) is therefore presented in this paper. The method enables nodes in mobile networks to dynamically adjust both their HELLO frequency and their transmission range depending on the relative speed. In TFPA, each node monitors its neighborhood table to count new neighbors and calculate the turnover ratio. The relationship between relative speed and turnover ratio is formulated, and the optimal transmission range is derived from a battery consumption model so as to minimize the overall transmission energy. Building on this theoretical analysis, the HELLO frequency is adapted dynamically in conjunction with the transmission range to maintain an accurate neighborhood table and to allow significant energy savings. The algorithm is simulated and compared to other state-of-the-art algorithms. The experimental results demonstrate that the TFPA algorithm obtains high neighborhood accuracy with low HELLO frequency (at least 11% average reduction) and with the lowest energy consumption.
Besides, the TFPA algorithm does not require any additional GPS-like device to estimate the relative speed of each node, hence the hardware cost is reduced.

## Categories and Subject Descriptors

C.2.1 [Network Architecture and Design]: Network topology

## General Terms

Algorithms

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
MSWiM'13, November 3-8, 2013, Barcelona, Spain.
Copyright 2013 ACM 978-1-4503-2353-6/13/11 ...$15.00.

## Keywords

Adaptive Hello Algorithm; Energy Efficient; Mobile Ad Hoc Network; Neighborhood Discovery; Transmission Range; Wireless Sensor Networks

## 1. INTRODUCTION

Wireless Sensor Networks (WSNs) are composed of sensor nodes that can sense their surrounding environment and communicate with each other through specific wireless protocols. Mobility of the sensor nodes is an emerging issue that is attracting more and more attention from the scientific community. It raises new questions such as optimizing energy consumption, maintaining connectivity of the WSN, routing in mobile networks, and many more. Sensor nodes can be attached to animals in order to track the animals' habits and their natural habitat. In this case, the mobility pattern is often hard to estimate, and protocols built for this setting must take into account possible losses of connectivity or delays in the transmission of messages.
Another possibility involves mobile agents (robots) that are given a mobility pattern to follow in order to improve some parameters of the WSN [7]. In this case, the overall energy efficiency of transmission can be significantly improved by using controlled mobility to place nodes smartly along the routing path. Due to the specific nature of mobile networks, some of the characteristic mechanisms used in static WSNs need to be redefined and adapted to the specific types of node mobility.

Neighborhood discovery is one of the most important protocols in WSNs. The mechanism behind it is rather simple: it consists in periodically sending a specific type of message, called HELLO messages (also known as beacon messages), and gathering data from the HELLO messages received. HELLO messages contain the sender id, a unique identification number of the node in the WSN (usually the MAC address in practical applications). Each node typically collects data from all HELLO messages it has received and organizes them into a neighborhood table, which can then be used for some kind of topology control [10] or proactive routing [4].

However, finding the proper HELLO frequency is not obvious: if the HELLO frequency is too low, nodes may not be detected by their neighbors, leading to outdated neighborhood tables, and protocol failures are likely to occur. On the contrary, if the frequency is too high, neighborhood tables are up to date, but energy and bandwidth are wasted. \ No newline at end of file diff --git a/samples/texts/4157293/page_3.md b/samples/texts/4157293/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..d82139b0feef1ac696f12fc8d24ff2c7b9fdb430 --- /dev/null +++ b/samples/texts/4157293/page_3.md @@ -0,0 +1,47 @@ +There exist many algorithms that exploit information gathered from additional devices such as GPS.
The ARH algorithm presented in [6] adapts the HELLO frequency when there is a noticeable error between the estimated speed and the actual speed read from GPS. In [5], the authors derived the relationship between the HELLO frequency and the link failure probability for a given speed, which again is obtained from GPS. The proper HELLO frequency is then calculated to guarantee that the link failure probability does not exceed a predefined threshold. However, the speed obtained from GPS cannot represent the actual changes in the surrounding environment. When a node is static or moving slowly but its neighbors are changing frequently, the HELLO frequency calculated by the previous two algorithms is lower than the value required to maintain the expected accuracy. Conversely, when a node is moving fast with a relatively static neighborhood table, the computed HELLO frequency will be higher than the actually required value and will consume more energy.

The authors in [3] proposed a Turnover based Adaptive hello Protocol (TAP) to reduce the hardware cost and to take the relative changes among nodes into account. It counts the number of changes that appear in the neighborhood table, called the *turnover*. The optimal value of the *turnover* is derived, and each node adapts its HELLO frequency, increasing or decreasing it within a fixed range to keep the *turnover* as close as possible to the optimal value. In [9], the authors proposed two algorithms: one, called *Cost*, runs with GPS; the other, called *NoTAP*, relies on the TAP algorithm. Both algorithms adapt the HELLO transmission range to reduce energy consumption. *Cost* risks sending HELLO messages with a very low power. Although the neighborhood table of *Cost* is accurate and its power consumption is low, the average number of neighbors is too small to guarantee the connectivity of the network.

In this paper, a Turnover based Frequency and transmission Power Adaptation algorithm (TFPA) is presented.
It builds on a previous work and dynamically adapts both the HELLO frequency and the transmission power based on the relative speed estimated at each node. The contributions of this work are:

* a theoretical analysis allowing the relative speed to be computed from the *turnover* in the neighborhood,

* a theoretical analysis of the globally energy-optimal transmission range of HELLO messages. The analysis highlights that the optimal range is independent of the speed, and hence can be applied to all nodes in the network to reduce the overall energy consumption,

* an adaptive neighborhood discovery mechanism, TFPA, that jointly and dynamically adapts a node's transmission range and HELLO frequency, providing both accuracy and energy efficiency.

In TFPA, every node regularly checks its neighborhood and, based on the observed changes and the theoretical analysis, dynamically and periodically adapts both its transmission range and its HELLO frequency, allowing low energy consumption while maintaining reliable neighborhood tables. The features of TFPA are as follows:

* **Local:** in TFPA, every node only watches its own neighborhood to adapt its range and frequency.

* **Distributed:** every node runs the same algorithm.

* **GPS-free:** TFPA does not rely on GPS-like devices and can be applied to general mobile ad hoc networks.

* **Energy-efficient:** results show that applying TFPA allows up to 11% energy savings.

* **Reliable:** results show that neighborhood tables achieved through TFPA present the same error and accuracy ratios as alternative neighborhood discovery protocols.

The remainder of the paper is organized as follows: Section 2 sets up the models, notations and preliminaries of this work. Section 3 provides the related theoretical analysis. Section 4 presents the TFPA algorithm. Simulation results are detailed in Section 5. Section 6 concludes and gives directions for future work.

# 2.
MODELS AND PRELIMINARIES

## 2.1 Models and Notations

Wireless networks are represented by a graph $G = (V, E)$ where $V$ is the set of nodes and $E \subseteq V^2$ is the set of edges: $(u, v) \in E$ means that $u$ and $v$ are neighbors (i.e., close enough to communicate). The neighborhood set $N(u)$ of a vertex $u$ is equal to

$$N(u) = \{v : (u, v) \in E \lor (v, u) \in E\}.$$

Each node is assigned a unique identifier (e.g., a MAC address). Wireless links are determined by the physical model. The most frequent one is the unit disk graph model [1]:

$$E = \{(u, v) \in V^2 \mid u \neq v \land |uv| \leq R_{max}\}$$

with $|uv|$ being the Euclidean distance between nodes $u$ and $v$, and $R_{max}$ the maximum communication range.

The basic HELLO protocol, first described in OSPF [8], works as follows: nodes regularly send HELLO messages to signal their presence to nearby nodes, and maintain a neighborhood table. The frequency of these messages is denoted $f_{HELLO}$ and the delay between them $d_{HELLO}$ (i.e., $d_{HELLO} = 1/f_{HELLO}$). When a node $u$ receives such a message from a node $v$, $u$ adds $v$ to its table, or updates the time stamp of the entry if $v$ was already there. We make no assumptions about the content of HELLO messages, except that they must contain the identifier of the sender.

The relation between HELLO frequency and relative speed was formulated (named Fopt for simplicity) in our previous work [11]. The idea of Fopt is that a node which strides a given distance across the communication area of another node has to be detected with a certain probability. If two nodes with transmission range $R$ move with a relative speed $S$, then for them to discover each other with probability $1 - \alpha$, the optimal HELLO frequency is:

$$f_{opt} = \frac{2S}{\alpha R} \quad (1)$$

where $\alpha < 1$ sets the expected accuracy of the neighborhood table, e.g. the expected accuracy is 90% when $\alpha = 0.1$.
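As a small numerical illustration of (1) (the helper below and its example values are ours, not part of the protocol specification):

```python
def f_opt(S, R, alpha):
    """Optimal HELLO frequency (Hz) from Eq. (1): f_opt = 2S / (alpha * R).

    S: relative speed (m/s), R: transmission range (m),
    alpha: one minus the expected neighborhood-table accuracy, 0 < alpha < 1.
    """
    if not 0 < alpha < 1:
        raise ValueError("alpha must lie in (0, 1)")
    return 2 * S / (alpha * R)

# Two nodes moving at a relative speed of 3 m/s with range 150 m,
# targeting 90% accuracy (alpha = 0.1):
print(f_opt(3, 150, 0.1))  # 0.4 Hz, i.e. one HELLO every 2.5 s
```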
\ No newline at end of file diff --git a/samples/texts/4157293/page_4.md b/samples/texts/4157293/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..aeb4202d81993c3421ef321c464c5f3641d5ef76 --- /dev/null +++ b/samples/texts/4157293/page_4.md @@ -0,0 +1,53 @@ +Figure 1: A new neighbor v of a node u - Global view [3].

## 2.2 Turnover

The concept of *turnover* was first defined in [3]: it is the ratio of the number of new neighbors of a sensor to the number of all its neighbors between two HELLO packets. We restate the part of the analysis from [3] that is useful for the remainder of this paper. In particular, we revisit the analysis of the optimal turnover ratio.

Let $u_0$ (resp. $v_0$) be the position of node $u$ (resp. $v$) at time $t_0$ and $u_1$ (resp. $v_1$) its position at time $t_1$. The probability $P(d)$ that node $v$ is a new neighbor of $u$ is calculated as the probability that $|u_0v_0| > R$ knowing that $|u_1v_1| < R$. Fig. 1 illustrates the mobility model. Given the new position of a node and its traveling distance $\Delta d = S \times \Delta t$, the blue dashed circle $C_{u_1,\Delta d}$ (resp. red dotted circle $C_{v_1,\Delta d}$) represents the possible positions of $u_0$ (resp. $v_0$). There exists a minimum distance $d_{min} = \max(0, R-2\Delta d)$ such that if $d < d_{min}$, $v$ was already a neighbor of $u$, and thus $P(d < d_{min}) = 0$. Only a node $v$ coming from the dotted blue angular sector of Fig. 2 can become a new neighbor of $u$.
$P(d)$ is expressed as:

$$ P(d) = \begin{cases} \frac{1}{2\pi^2} \int_{\omega_{min}}^{\pi} \theta_{max}\, d\omega & \text{if } d_{min} < d < R \\ 0 & \text{otherwise} \end{cases} \quad (2) $$

(the direction of $v$ is uniform over $2\pi$ and, by symmetry, $\omega$ can be taken uniform over $[0, \pi]$), where $\theta_{max} = 2 \arccos\left(\frac{R^2-\Delta d^2-k^2}{2k\Delta d}\right)$ is the maximum angle of the dotted blue sector, $k = \sqrt{\Delta d^2 + d^2 - 2d\Delta d \cos\omega}$, and the constraint $k > R - \Delta d$ leads to $\omega_{min} = \arccos\left(\frac{d^2+2R\Delta d-R^2}{2d\Delta d}\right)$ by the law of cosines.

In this analysis, it is assumed that nodes are randomly deployed according to a Poisson point process (node positions are independent) with density $\lambda > 0$, $\lambda$ being the mean number of nodes per surface unit. The expected number of new neighbors that node $u$ encounters after a time period $\Delta t$ is then:

$$ E[n]_{\Delta t} = \int_{d=0}^{R} 2\lambda\pi\, d\, P(d)\, \mathrm{d}d \quad (3) $$

By substituting (2) into (3), we have:

$$ E[n]_{\Delta t} = \frac{\lambda}{\pi} \int_{d_{min}}^{R} \int_{\omega_{min}}^{\pi} d\, \theta_{max}\, \mathrm{d}\omega\, \mathrm{d}d \quad (4) $$

In this work, $E[n]_{\Delta d}$ is used as a synonym of $E[n]_{\Delta t}$. The *turnover* of node $u$ is the ratio of the number of new neighbors to the number of all neighbors between two HELLO packets:

$$ r = \frac{E[n]_{\Delta d}}{\lambda\pi R^2} \quad (5) $$

Figure 2: A new neighbor v of a node u - Zoom [3].

The above procedure shows how $r$ was deduced from the neighborhood observation in the previous work [3]. In the following, we approximate the integral expression of $r$ and derive the relative speed $S$ from it.

# 3. THEORETICAL ANALYSIS OF HELLO FREQUENCY AND RANGE ADAPTATION

## 3.1 Relationship between Turnover Ratio and Relative Speed

Fig. 3 plots the variation of $E[n]_{\Delta d}$ with different $R$ and $\Delta d$. As can be seen, for a given $\Delta d$, $E[n]$ increases as $R$ increases.
Note that the curves are almost linear in $R$, so they can be approximated by linear functions, giving

$$ E[n]_{\Delta d} = \frac{\lambda}{\pi} l(\Delta d)\, R $$

where the slope $l(\Delta d)$ is a function of $\Delta d$. Reading its value for different $\Delta d$ gives $l(5) = 13\pi$, $l(10) = 26\pi$, $l(20) = 51\pi$, thus $l(\Delta d) \approx 2.6\pi\Delta d$ and $\frac{\lambda}{\pi}l(\Delta d) \approx 2.6\lambda\Delta d$. Therefore, $E[n]_{\Delta d}$ can be expressed as:

$$ E[n]_{\Delta d} = 2.6\lambda\Delta dR \quad (6) $$

By substituting (6) into (5), we obtain the approximate expression of the turnover ratio:

$$ r = \frac{2.6\Delta d}{\pi R} \quad (7) $$

If $\Delta t = \frac{1}{f_{opt}}$, which is the optimal HELLO period (1) defined by Fopt, then $\Delta d = \frac{S}{f_{opt}} = \frac{\alpha R}{2}$, and we have

$$ r_{opt} = \frac{1.3\alpha}{\pi} $$

As can be noticed, $r_{opt}$ is a function of $\alpha$ only and does not depend on $S$, $R$ or $\lambda$. For different $\alpha$:

$$ r_{opt}(0.1) \approx 0.04, \quad r_{opt}(0.2) \approx 0.08, \quad r_{opt}(0.3) \approx 0.12 $$ \ No newline at end of file diff --git a/samples/texts/4157293/page_5.md b/samples/texts/4157293/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..afc25d787438dac04ce5496da6e155a56b073381 --- /dev/null +++ b/samples/texts/4157293/page_5.md @@ -0,0 +1,39 @@ +Figure 3: $E[n]_{\Delta d}$ varies with R for different $\Delta d$

Figure 4: $r_{opt}$ drawn by TAP [3]

When compared to the *turnover* curves drawn by TAP (see Fig. 4), the results of the simplified expression of $r_{opt}$ correlate very well with the curves. This validates the linear approximation of the complex integral expression in (4). The subsequent analysis thus benefits from the linear approximate expression.
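The closed forms above are easy to check numerically. The sketch below (helper names are ours) reproduces the three quoted $r_{opt}$ values and inverts (7) into the speed estimate that the paper states as (8):

```python
import math

def r_opt(alpha):
    """Optimal turnover ratio r_opt = 1.3 * alpha / pi; independent of S, R, lambda."""
    return 1.3 * alpha / math.pi

def estimate_speed(r, R, delta_t):
    """Invert r = 2.6 * delta_t * S / (pi * R) to recover the relative speed S."""
    return r * math.pi * R / (2.6 * delta_t)

# The three values quoted in the text:
print([round(r_opt(a), 2) for a in (0.1, 0.2, 0.3)])  # [0.04, 0.08, 0.12]

# A node with R = 150 m observing r = 0.04 over delta_t = 2.5 s:
print(round(estimate_speed(0.04, 150, 2.5), 2))  # ~2.9 m/s
```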
In practical applications, the *turnover* $r$ is obtained by analyzing the HELLO messages received over a time period $\Delta t$. Note that (7) can be rewritten as $r = \frac{2.6\Delta t \times S}{\pi R}$. Once $r$ is available, the relative speed $S$ is calculated by

$$S = \frac{r\pi R}{2.6\Delta t} \quad (8)$$

Afterwards, the HELLO frequency $f_{HELLO}$ and the transmission range $R$ are calculated by (1), and the adaptation solution is available to each node.

We provide results that demonstrate the correctness of the theoretical analysis for speed estimation. In Fig. 5 the blue line represents the speed estimated through (8) **by observing the turnover**, and the red line represents the speed when TAP is implemented. At the beginning, when the network is being established, each sensor node is a newcomer to its "neighbors", and many new neighbors are detected in a very short time, which results in a steep rise of the *turnover*; $S$ is thus much higher than during the following stable periods.

Figure 5: The comparison of the speed estimated by (8) and the speed calculated by (1) in TAP

The results of both methods correlate well once the network enters a relatively stable state. However, the speed calculated from TAP is much smaller than the one computed directly through (8) at the beginning, because TAP constrains the adaptation of $f_{HELLO}$ to a certain range, which makes it less sensitive than (8) to the dynamic changes in the network.

## 3.2 Minimizing Energy Consumption

The energy spent during a period of time $\Delta t$ by a node $u$ can be expressed as the number of messages sent in $\Delta t$ multiplied by the energy cost of sending one message. The number of messages sent by $u$ during $\Delta t$ is equal to $\Delta t \cdot f_u(R_u, t)$ where $f_u(R_u, t)$ is the HELLO frequency of node $u$ and $R_u$ is the communication range of node $u$ at time $t$. Energy consumption varies with both the HELLO frequency and the transmission range of sensor nodes.
In this work, the energy consumption model for sending a HELLO message is the one given by [2]:

$$E(R) = R^A + C$$

where $A (> 1)$ represents the signal attenuation coefficient with distance and $C$ is the overhead due to signal processing. $A$ and $C$ are constant as long as the deployment environment is homogeneous and the lengths of HELLO messages are identical. Based on this model, the energy spent by a sensor node on sending HELLO messages during $\Delta t$ is:

$$cost_{\Delta t}(R) = (R^A + C)f\Delta t \quad (9)$$

By substituting (1) into (9), we have:

$$cost_{\Delta t}(R) = (R^A + C)\frac{2S}{\alpha R}\Delta t \quad (10)$$

The objective is then to find, for any $S$, the $R_{opt}$ that minimizes $cost_{\Delta t}$; the solution is obtained by setting the derivative of (10) to 0:

$$\frac{\partial cost_{\Delta t}(R)}{\partial R} = \left[ (A-1)R^{A-2} - \frac{C}{R^2} \right] \frac{2S}{\alpha} \Delta t = 0 \quad (11)$$

therefore we have:

$$R_{opt} = \sqrt[A]{\frac{C}{A-1}} \quad (12)$$ \ No newline at end of file diff --git a/samples/texts/4157293/page_6.md b/samples/texts/4157293/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..62d76446c74dc664d84cfa9775b409f3d374c693 --- /dev/null +++ b/samples/texts/4157293/page_6.md @@ -0,0 +1,43 @@ +Figure 6: Curves of cost(R) for different speeds

The second derivative of $\text{cost}_{\Delta t}$ is

$$
\frac{\partial^2 \text{cost}_{\Delta t}(R)}{\partial R^2} = \left[ (A-1)(A-2)R^{A-3} + \frac{2C}{R^3} \right] \frac{2S}{\alpha} \Delta t \quad (13)
$$

Substituting $C = (A-1)R_{\text{opt}}^A$ from (12) into (13) gives $\frac{\partial^2 \text{cost}_{\Delta t}(R)}{\partial R^2}\big|_{R=R_{\text{opt}}} = A(A-1)R_{\text{opt}}^{A-3}\frac{2S}{\alpha}\Delta t$, which is positive since $A$ is always larger than 1. It is therefore guaranteed that $\text{cost}_{\Delta t}(R)$ reaches its global minimum at $R_{\text{opt}}$. Fig. 6 shows the variation of energy consumption with $R$ for three different speed levels.
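As a quick numerical check of the cost model (9)-(12), the sketch below (helper names are ours) uses the free-space values $A = 2$, $C = 2.25 \times 10^4$ quoted in the paper and verifies that the cost is minimized at $R_{opt}$ regardless of the speed:

```python
def r_opt_range(A, C):
    """Optimal transmission range from Eq. (12): R_opt = (C / (A - 1)) ** (1 / A)."""
    assert A > 1, "attenuation coefficient A must exceed 1"
    return (C / (A - 1)) ** (1.0 / A)

def cost(R, A, C, S=3.0, alpha=0.1, dt=1.0):
    """HELLO energy over dt, Eq. (10): (R^A + C) * 2S / (alpha * R) * dt."""
    return (R ** A + C) * 2 * S / (alpha * R) * dt

A, C = 2, 2.25e4          # free-space attenuation, processing overhead
Ropt = r_opt_range(A, C)  # 150.0 m
# R_opt minimizes the cost; the speed S only scales the curve:
assert all(cost(Ropt, A, C) <= cost(R, A, C) for R in (50, 100, 200, 250))
print(Ropt)  # 150.0
```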
With $C = 2.25 \times 10^4$ and $A = 2$, $R_{\text{opt}}$ equals 150 m, and, as expected, the minimum energy cost for each speed level is obtained at $R = R_{\text{opt}}$.

We notice from (12) that $R_{opt}$ only depends on $C$ and $A$ and is independent of $S$; hence, theoretically, all sensor nodes in the same network should keep their transmission range as close as possible to $R_{opt}$ in order to minimize the energy cost while maintaining a highly accurate neighborhood table.

Figure 7: An example of the valid region and the curves of (1) for different S

## 4. THE TFPA ALGORITHM

As discussed in the previous section, all sensor nodes should transmit their HELLO messages with a communication range close to $R_{opt}$ in order to minimize the energy cost. However, due to hardware constraints, there exists a maximum transmission range ($R_{max}$), and for the sake of reliable communication there is a minimum transmission range ($R_{min}$). Therefore, before running the algorithm, the value of $R_{opt}$ must be checked. If $R_{opt} \in [R_{min}, R_{max}]$, the TFPA algorithm can be implemented directly; otherwise, if $R_{opt} \notin [R_{min}, R_{max}]$, the global minimum cost is not achievable. When $R_{opt} < R_{min}$, the local minimum cost is obtained at $R_{min}$, so we set $R_{opt} = R_{min}$. If $R_{opt} > R_{max}$, the local minimum cost is obtained at $R_{max}$, so we set $R_{opt} = R_{max}$. After truncating $R_{opt}$ to the valid region, the TFPA algorithm can be implemented.

It should be noted that the change of $R$ is realized by modifying the transmission power of the antenna. Several models exist for wireless propagation, such as the free-space model, the log-normal model, the indoor multi-wall model, and so on. By computing the minimum transmit power needed to reach the computed $R$, the adaptation of TFPA is easily obtained. Besides, $f_{HELLO}$ also has an adaptation range: [$f_{min}, f_{max}$].
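The truncation of $R_{opt}$ to the hardware-feasible region described above can be sketched as follows (a minimal sketch; the helper name is ours):

```python
def clamp_r_opt(r_opt, r_min, r_max):
    """Truncate the optimal range to the feasible region [r_min, r_max].

    If r_opt falls outside the region, the (local) minimum of the cost
    function is attained at the nearest boundary, as argued in Section 4.
    """
    return min(max(r_opt, r_min), r_max)

print(clamp_r_opt(150, 50, 250))   # inside the region: unchanged, 150
print(clamp_r_opt(150, 180, 250))  # below R_min: truncated to 180
```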
Fig. 7 shows the valid region (gray rectangle) in which $f_{HELLO}$ and $R$ can be adjusted. The curves of (1) for different $S$ are shown in the figure as well; the adaptations from time $t_k$ to $t_{k+2}$ will be explained later.

In the previous section we showed that the *turnover* may be very small (e.g., $r = 0.04$ when $\alpha = 0.1$), whereas it is nearly impossible for a node to observe such a small *turnover* between two successive HELLO messages in practice. A solution to this problem is to let nodes archive more than one table into a history of size $X$: if $X$ is sufficiently large, a correct value can be expected. The turnover is then computed by counting the neighbors present in the most recent table that are not present in the older ones, and by using the current HELLO delay as:

$$
r = \frac{\text{nb of new neighbors}}{\text{total nb of neighbors}} \cdot \frac{d_{\text{HELLO}}}{\Delta t} \quad (14)
$$

The implementation of the TFPA algorithm consists of two steps: a *Training step* and an *Adaptation step*.

In the training step, the procedure is run for $X = 10$ periods at each node to build up the history of neighbors, similarly to the TAP and NoTAP algorithms. It also lets the frequency and the communication range converge to a certain level, which then serves the adaptation step better.
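The history-based turnover computation of (14) can be sketched as follows (a minimal sketch, assuming neighborhood tables are represented as sets of neighbor ids; helper names are ours):

```python
def turnover(history, d_hello, delta_t):
    """Turnover ratio per Eq. (14), from a history of neighborhood tables.

    history: list of sets of neighbor ids, oldest table first; the last
    entry is the most recent table.  A neighbor counts as new if it is in
    the most recent table but absent from every older one.
    """
    *older, current = history
    seen_before = set().union(*older) if older else set()
    new = current - seen_before
    if not current:
        return 0.0
    # Rescale from the observation window delta_t to one HELLO period.
    return len(new) / len(current) * d_hello / delta_t

tables = [{1, 2, 3}, {1, 2, 3, 4}, {2, 3, 4, 5}]   # node 5 is new
print(turnover(tables, d_hello=2.5, delta_t=5.0))  # (1/4) * (2.5/5) = 0.125
```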
The adjustment of $f_{HELLO}$ is calculated through the period between two HELLO messages, $d_{HELLO}$, where $f_{HELLO} = \frac{1}{d_{HELLO}}$:

$$
d_{\text{HELLO}} = \begin{cases} d_{\text{HELLO}} + \frac{d_{\text{HELLO}}}{f_{\text{factor}}} \cdot g(r) & \text{if } r < r_{\text{opt}} \\ d_{\text{HELLO}} - \frac{d_{\text{HELLO}}}{f_{\text{factor}}} \cdot g(r) & \text{otherwise} \end{cases} \quad (15)
$$

The function $g(r)$ uses the turnover to reflect the distance between $r$ and $r_{opt}$:

$$
g(r) = \begin{cases} \left(\frac{r-r_{\text{opt}}}{r_{\text{opt}}} \right)^2 & \text{if } r < 2 \cdot r_{\text{opt}} \\ 1 & \text{otherwise} \end{cases} \tag{16}
$$ \ No newline at end of file diff --git a/samples/texts/4157293/page_7.md b/samples/texts/4157293/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..0d26b1b83e40f47d5520e8cbd16c5ebbfe57076e --- /dev/null +++ b/samples/texts/4157293/page_7.md @@ -0,0 +1,56 @@ +Hence, the maximum change of $d_{HELLO}$ at each adjustment is limited by the value of $\frac{d_{HELLO}}{f_{factor}}$, with $f_{factor} > 0$. The pseudo code of the *training step* is as follows:

**TFPA: Training step**

1 **while** $X < 10$ **do**
2    Calculate $r$
3    **if** $r < r_{opt}$ **then**
4        Decrease $f_{HELLO}$ by (15)
5    **else if** $r > r_{opt}$ **then**
6        Augment $f_{HELLO}$ by (15)
7    **end if**
8    $X++$
9 **end while**

The *Adaptation step* is run after the *training step* has finished. At each adaptation interval, each node computes the relative speed $S$ from the current turnover ratio $r$ using (8). $f_{HELLO}$ and $R$ are then adjusted directly by (1) within the valid region. An example is shown in Fig.
7. At time $t_{k+1}$, the estimated speed $\tilde{s}_{k+1}$ is higher than $\tilde{s}_k$, but the curve still intersects $R_{opt} = 150$ m inside the valid region, so $R_{k+1}$ is unchanged and equal to $R_{opt}$. At $t_{k+2}$, $\tilde{s}_{k+2}$ is so high that the intersection with $R_{opt} = 150$ m lies outside the valid region. However, a small part of the curve is still inside the valid region, and according to the theoretical analysis the minimum cost is then obtained at $R_{k+2}$; therefore $f_{k+2}$ is assigned the maximum value $f_{max}$. The pseudo code of the *Adaptation step* is shown below:

**TFPA: Adaptation step**

1 **if** time to send HELLO packet **do**
2    Calculate $r$
3    Calculate $S$ by (8)
4    **if** $\tilde{H}_{optmin} \leq \tilde{H} = \frac{2S}{\alpha} \leq \tilde{H}_{optmax}$ **do**
5        $R = R_{opt}$
6        $f_{HELLO} = \frac{\tilde{H}}{R}$
7    **else if** $\tilde{H} > \tilde{H}_{optmax}$ **do**
8        $f_{HELLO} = f_{max}$
9        $R = min(\frac{\tilde{H}}{f_{HELLO}}, R_{max})$
10   **else if** $\tilde{H} < \tilde{H}_{optmin}$ **do**
11       $f_{HELLO} = f_{min}$
12       $R = max(\frac{\tilde{H}}{f_{HELLO}}, R_{min})$
13   **end if**
14 **end if**

Note that $\tilde{H}_{optmax} = f_{max}R_{opt}$ and $\tilde{H}_{optmin} = f_{min}R_{opt}$.

Figure 8: Validate the minimum cost curve

# 5. PERFORMANCE EVALUATION

In this section, the proposed TFPA algorithm is evaluated through simulation using the WSNET¹ simulator. Because the purpose of a HELLO algorithm is neighborhood discovery, the algorithm must be able to keep the neighborhood tables consistent among nodes at minimum cost. Thus, in addition to HELLO frequency and power consumption, we use two evaluation metrics: *neighborhood accuracy* and *neighborhood error*. Assuming that $N(u)$ is the set of actual neighbors of a node $u$, and $N'(u)$ the set of neighbors known to $u$ (i.e. whose identifier is present in its neighborhood table), these two metrics are defined below. Notice that $acc(u)+err(u)$ is not necessarily equal to 1.

**Definition 1.** Neighborhood accuracy $acc(u)$ is the proportion of actual neighbors of node u that have indeed been detected by u.

$$acc(u) = \frac{|N(u) \cap N'(u)|}{|N(u)|} \times 100.$$

**Definition 2.** Neighborhood error $err(u)$ measures both how many neighbors of node u have not been detected and how many "false neighbors" remain in its neighborhood table (i.e. old neighbors that have not been removed).

$$err(u) = \frac{|N(u) \setminus N'(u)| + |N'(u) \setminus N(u)|}{|N(u)|} \times 100.$$

First of all, the minimum cost model stated in Sec. 3.2 is validated. In the simulation, 100 nodes were randomly distributed in a square area of size $1000 \text{ m} \times 1000 \text{ m}$ and the maximum speed of the nodes varied over 3 levels: 0 ~ 3 m/s, 0 ~ 5 m/s and 0 ~ 10 m/s. Propagation uses the free-space model, with $A = 2$ and $C = 2.25 \times 10^4$; therefore $R_{opt} = 150$ m by substituting $A$ and $C$ in (12), and $\lambda = \frac{100}{1000 \times 1000}$. Fig. 8 shows the energy consumed by the TFPA protocol. For each level, different $R'_{opt} \in [50 \text{ m}, 250 \text{ m}]$ are tested. The simulation lasts 100 s for each $R'_{opt}$. As can be seen, the energy cost at $R_{opt} = 150$ m is the minimum for all mobility levels and the curves behave like those of Fig. 6; the theoretical analysis of $R_{opt}$ is thus confirmed.

To evaluate the performance of the TFPA algorithm, we chose to compare it to four other comparable schemes: the TAP algorithm [3], the NoTAP algorithm [9], the Fopt algorithm [11] and the ARH algorithm [6]. TFPA, TAP and NoTAP do not

¹http://wsnet.gforge.inria.fr/ \ No newline at end of file diff --git a/samples/texts/4157293/page_8.md b/samples/texts/4157293/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..0a6abc53d5fcfc3b76f6d7099eb34dcb9176bdb5 --- /dev/null +++ b/samples/texts/4157293/page_8.md @@ -0,0 +1,23 @@ +Figure 9: Real mobility trace from pedestrian runners (two examples)

require nodes to be equipped with GPS, while Fopt sets $f_{HELLO}$ based on the current speed and ARH sends a HELLO message when the error between the predicted location and the real location is greater than a threshold; thus a GPS or localization module is needed for both the Fopt and ARH algorithms.

The node mobility traces are generated from log files obtained from real experiments with pedestrian runners². The moving speed of each node was spread around a mean value of 3 m/s. Two examples of the mobility traces are shown in Fig. 9, where each curve stands for one runner. A varying number of nodes (from 100 to 600) were deployed in a 1000 m × 1000 m square region. These nodes have the same optimal transmission range $R_{opt} = 100$ m, with $R_{min} = 50$ m and $R_{max} = 150$ m. For each network size, we conducted 100 simulation runs and obtained the average results with 95% confidence intervals.

Fig. 10(a) and Fig. 10(b) show the accuracy and error of the neighborhood table, respectively. The TFPA algorithm performs slightly better than TAP, Fopt and ARH, but slightly worse than NoTAP.

However, looking at Fig. 10(c), which compares the value of $f_{HELLO}$ for different numbers of nodes, we observe that the TFPA and ARH algorithms send far fewer HELLO messages than the others. TFPA performs better than ARH when the node density is low, while ARH performs slightly better as the number of nodes increases.
However, on average, the $f_{HELLO}$ of ARH is about 11% higher than that of TFPA. The $f_{HELLO}$ of Fopt is 90% higher than that of TFPA, and those of NoTAP and TAP are 80% and 60% higher, respectively. The Fopt and ARH algorithms mostly perform consistently, but for different reasons: for Fopt, the average speed of each node is almost constant, so the average $f_{HELLO}$ is almost constant; for ARH, the HELLO frequency depends on the error a node detects between its position estimate and its real position, both of which are independent of the number of neighbors and the total number of nodes. The curves of the TFPA, TAP and NoTAP algorithms follow a similar trend and climb slightly as the node density increases, which indicates that the dynamics of the neighborhood around each sensor node increase.

Fig. 10(d) shows the *turnover* for all the methods except ARH (for which a *turnover* is not meaningful). As expected, all the turnover ratios are close to 0.04, which is consistent with the theoretical analysis. The Fopt algorithm yields the highest $f_{HELLO}$ and thus the lowest turnover; TFPA yields the lowest $f_{HELLO}$ and thus the highest turnover. However, the turnover ratio of Fopt decreases, because it only considers the absolute speed of each node and ignores the relative change of its neighbors. On the contrary, the turnovers of TFPA, TAP and NoTAP are relatively stable, because they do take the mobility of the neighbors into account.

The remaining energy is shown in Fig. 10(e). Although the $f_{HELLO}$ of the NoTAP algorithm is lower than that of Fopt, its energy consumption is the highest. Since Fopt always sends HELLO messages with $R_{opt}$ while NoTAP adapts $R$ to achieve high accuracy, the reduction in $f_{HELLO}$ cannot compensate for the cost of modifying the transmission range; hence the overall energy cost ends up being the highest.
Meanwhile, we observe that TFPA costs the least energy among the algorithms. This is explained by the fact that, in addition to having the lowest $f_{HELLO}$, TFPA adapts $R$ only when the computed $f_{HELLO}$ reaches the boundary of the valid region, while the NoTAP algorithm adapts $R$ whenever $f_{HELLO}$ changes. Therefore, with the lowest $f_{HELLO}$ and a quasi-globally optimal $R$, the TFPA algorithm is able to save more energy than the others. + +The average speed of the WSN is also estimated: Fig. 10(f) shows that the estimate is slightly higher than the real average speed, with a mean error of 0.67 m/s. Therefore, without any GPS-like devices, the TFPA algorithm properly estimates the dynamics of the WSN, and this estimate can be employed by other applications that also require speed information. + +## 6. CONCLUSION AND FUTURE WORKS + +In this paper, a TFPA HELLO algorithm is proposed for mobile ad hoc networks. The algorithm relies on a theoretical analysis in which the expression of the turnover ratio + +²http://researchers.lille.inria.fr/~mitton/mobilitylog.html \ No newline at end of file diff --git a/samples/texts/4157293/page_9.md b/samples/texts/4157293/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..d79a8f082b78746e17a0d5fb259158ea3b5f54d9 --- /dev/null +++ b/samples/texts/4157293/page_9.md @@ -0,0 +1,31 @@ +Figure 10: Different metrics of the WSN vary with number of nodes + +$r$ is approximated properly. By taking the relative change of neighborhoods into account, the relationship between $r$ and the relative speed $S$ is derived, which enables the sensor nodes to be aware of the dynamics of their environment without using GPS-like devices, saving hardware cost. Moreover, the optimal transmission range is deduced from the battery consumption model, which makes the adaptation procedure energy efficient.
The simulation results demonstrate that the algorithm is able to keep the neighborhood table accurate with a low HELLO frequency (comparable to, and even lower than, that of algorithms requiring GPS) and the lowest energy consumption. + +Some issues remain open, such as the battery consumption model, since different models result in different values of $R_{opt}$ under the same derivation; the propagation model could also be made more realistic when adapting the transmission power. We would like to further study the consequences of these more realistic assumptions and adapt the TFPA protocol accordingly. + +## 7. REFERENCES + +[1] B. Clark, C. Colbourn, and D. Johnson. Unit disk graphs. *Discrete Mathematics*, 86(1-3):165–177, 1990. + +[2] E. Fleury and D. Simplot-Ryl. *Réseaux de capteurs*. Hermes Science - Lavoisier, 2009. + +[3] F. Ingelrest, N. Mitton, and D. Simplot-Ryl. A turnover based adaptive hello protocol for mobile ad hoc and sensor networks. In *MASCOTS*, pages 9–14, 2007. + +[4] P. Jacquet, P. Mühlethaler, T. Clausen, A. Laouiti, A. Qayyum, and L. Viennot. Optimized link state routing protocol for ad hoc networks, 2001. + +[5] N. Li, J. C. Hou, and L. Sha. Design and analysis of an MST-based topology control algorithm. *IEEE Transactions on Wireless Communications*, 4(3):1195–1206, 2005. + +[6] X. Li, N. Mitton, and D. Simplot-Ryl. Mobility prediction based neighborhood discovery in mobile ad hoc networks. In *Networking* (1), pages 241–253, 2011. + +[7] V. Loscrì, E. Natalizio, and C. Costanzo. Simulations of the impact of controlled mobility for routing protocols. *EURASIP J. Wireless Comm. and Networking*, 2010, 2010. + +[8] J. Moy. *OSPF - Open Shortest Path First*. RFC 1583, March 1994. + +[9] J. Radak and N. Mitton. Transmission range adaptation based energy efficient neighborhood discovery. In *MSWiM'12*, 2012. + +[10] J. Radak, N. Mitton, and D. Simplot-Ryl. Using battery level as metric for graph planarization. In *ADHOC-NOW*, pages 58–71, 2011. + +[11] A. Troël.
*Prise en compte de la mobilité dans les interactions de proximité entre terminaux à profils hétérogènes*. PhD thesis, Université de Rennes, 2004. \ No newline at end of file diff --git a/samples/texts/4392850/page_1.md b/samples/texts/4392850/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..d57c9c25da5c495ab51f41ec5ce605978d394f36 --- /dev/null +++ b/samples/texts/4392850/page_1.md @@ -0,0 +1,29 @@ +# A NON-PARTITIONABLE COHEN-MACAULAY SIMPLICIAL COMPLEX + +ART M. DUVAL, BENNET GOECKNER, CAROLINE J. KLIVANS, AND JEREMY L. MARTIN + +**ABSTRACT.** A long-standing conjecture of Stanley states that every Cohen–Macaulay simplicial complex is partitionable. We disprove the conjecture by constructing an explicit counterexample. Due to a result of Herzog, Jahan and Yassemi, our construction also disproves the conjecture that the Stanley depth of a monomial ideal is always at least its depth. + +## 1. INTRODUCTION + +Cohen–Macaulay simplicial complexes are ubiquitous in algebraic and topological combinatorics. They were introduced in 1975 by Stanley in his celebrated proof of the Upper Bound Conjecture for spheres [Sta75b].¹ The theory of Cohen–Macaulay rings has long been of great importance in algebra and algebraic geometry; see, e.g., [Ree57, ZS60, Gro64, Hoc72, Hoc80, BH93]. The connection to combinatorics via what is known as Stanley–Reisner theory was established by Hochster [Hoc72], Reisner [Rei76], and Stanley [Sta75a]; standard references for this subject are [Sta96] and [BH93]. + +The focus of this article is the following conjecture, described by Stanley as “a central combinatorial conjecture on Cohen–Macaulay complexes” [Sta96, p. 85]. It was originally proposed by Stanley [Sta79, p. 149] in 1979 and independently by Garsia [Gar80, Remark 5.2] in 1980 for order complexes of Cohen–Macaulay posets. + +**Conjecture 1.1 (Partitionability Conjecture).** Every Cohen–Macaulay simplicial complex is partitionable. 
+ +We explicitly construct a Cohen–Macaulay complex that is not partitionable, thus disproving the Partitionability Conjecture. In fact, we give a general method for constructing counterexamples and an explicit infinite family of non-partitionable Cohen–Macaulay complexes. We begin by giving some background for the conjecture, which will also be directly relevant in our construction. + +Two basic invariants of a simplicial complex $\Delta$ are its $f$- and $h$-vectors + +$$f(\Delta) = (f_{-1}(\Delta), f_0(\Delta), \dots, f_d(\Delta)), \quad h(\Delta) = (h_0(\Delta), h_1(\Delta), \dots, h_{d+1}(\Delta)),$$ + +*Date:* March 24, 2016. + +*2010 Mathematics Subject Classification.* 05E45, 13F55. + +*Key words and phrases.* simplicial complex, $h$-vector, Cohen–Macaulay, constructibility, partitionability, Stanley depth. + +This work was partially supported by a grant from the Simons Foundation (grant number 315347 to J.L.M.). + +¹See [Sta14] for his engaging and personal account of how the proof came to be. \ No newline at end of file diff --git a/samples/texts/4392850/page_10.md b/samples/texts/4392850/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..f2fa5e497d7a2d637b59c21fea8efa3ffe0948ac --- /dev/null +++ b/samples/texts/4392850/page_10.md @@ -0,0 +1,27 @@ +3.2. **Stanley depth.** Let $\mathbb{k}$ be a field and $S = \mathbb{k}[x_1, \dots, x_n]$, and let $M$ be a $\mathbb{Z}^n$-graded $S$-module. A *Stanley decomposition* $\mathcal{D}$ of $M$ is a vector space decomposition + +$$M = \bigoplus_{i=1}^{r} \mathbb{k}[X_i] \cdot m_i$$ + +where each $X_i$ is a subset of $\{x_1, \dots, x_n\}$ and each $m_i$ is a homogeneous element of $M$. The *Stanley depth* of $M$ is defined as + +$$\mathrm{sdepth} M = \max_{\mathcal{D}} \{\min(|X_1|, \dots, |X_r|)\},$$ + +where $\mathcal{D}$ ranges over all Stanley decompositions of $M$.
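This definition can be seen in action on a small example of our own (not from the paper). Take $S = \mathbb{k}[x,y]$ and $M = S/(xy)$: every monomial of $M$ is a pure power of $x$ or of $y$, so

$$M = \mathbb{k}[x] \cdot 1 \oplus \mathbb{k}[y] \cdot y$$

is a Stanley decomposition with $|X_1| = |X_2| = 1$. Hence $\mathrm{sdepth} M \geq 1$; since $\mathrm{depth} M = 1$ as well (the ring is one-dimensional and Cohen–Macaulay), the inequality of the Depth Conjecture holds for this module.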
If $\Phi$ is an (absolute or relative) simplicial complex, then we define its Stanley depth to be the Stanley depth of its associated Stanley-Reisner ring or module. This invariant has received substantial recent attention [PSFTY09, Her13], centering on the Depth Conjecture of Stanley [Sta82, Conjecture 5.1], which we now restate. + +**Conjecture 1.2** (Depth Conjecture). For all $\mathbb{Z}^n$-graded $S$-modules $M$, + +$$\mathrm{sdepth} M \geq \mathrm{depth} M.$$ + +Herzog, Jahan and Yassemi [HJY08, Corollary 4.5] proved that if $\Delta$ is a Cohen-Macaulay simplicial complex whose Stanley-Reisner ring [Sta96, §II.1] is $\mathbb{k}[\Delta] := S/I_\Delta$ (so that $\mathrm{depth} \mathbb{k}[\Delta] = \dim \mathbb{k}[\Delta] = \dim \Delta + 1$), then Conjecture 1.2 holds for $\mathbb{k}[\Delta]$ if and only if $\Delta$ is partitionable. Therefore, our construction provides a counterexample to the Depth Conjecture. Katthän has conjectured that the inequality $\mathrm{sdepth} S/I \geq \mathrm{depth} S/I - 1$ holds for every monomial ideal $I$; for a detailed exposition and the evidence for this conjecture, see [Kat16]. + +A smaller counterexample to Conjecture 1.2 is provided by the relative complex $Q'$ in Remark 3.6. The depth of each of $C_3$ and $Q'$ is easily seen to be 4, but the Stanley depth of each of $C_3$ and $Q'$ is 3. The Stanley depth computations were made by Katthän [Kat], using the algorithm developed by Ichim and Zarojanu [IZ14]. + +## 4. OPEN QUESTIONS + +Now that we know that Cohen-Macaulayness and even constructibility are not sufficient to guarantee partitionability, it is natural to ask what other conditions do suffice. Hachimori defined a related but more restricted class of *strongly constructible* complexes and showed that they are always partitionable [Hac00, Corollary 4.7]. Here are two additional possibilities, inspired by what our counterexample $C_3$ is not.
First, $C_3$ is not homeomorphic to a ball, because the triangles in $A$ are each contained in three facets. On the other hand, balls are Cohen-Macaulay, motivating the following question: + +**Question 4.1.** Is every simplicial ball partitionable? + +This conjecture is true if we further assume the ball is convexly realizable, by [Sta96, Proposition III.2.8]; see also [KS91]. On the other hand, there exist non-convex simplicial balls in dimensions as small as 3; see, e.g., [Lut04b, Lut08]. + +Garsia [Gar80, Remark 5.2] proposed the Partitionability Conjecture for the special class of order complexes of *Cohen-Macaulay posets* (see also [Bac76, Bac80, BGS82]), which give rise to balanced Cohen-Macaulay simplicial complexes. Recall that a $d$-dimensional simplicial complex is *balanced* if its vertices can be colored with \ No newline at end of file diff --git a/samples/texts/4392850/page_11.md b/samples/texts/4392850/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..84e86e63b69fc11f8fdd94fd04ad2b957972a80c --- /dev/null +++ b/samples/texts/4392850/page_11.md @@ -0,0 +1,31 @@ +$d+1$ colors so that every facet has one vertex of each color. For instance, if $P$ is a ranked poset, then its order complex is easily seen to be balanced by associating colors with ranks. The complex $\tilde{Q}$ with facets listed in (5) is not balanced (because its 1-skeleton is not 4-colorable), hence neither is $C_3$ or $C_{25}$, nor indeed $C_N$ for any $N$. + +**Question 4.2.** *Is every balanced Cohen-Macaulay simplicial complex partitionable?* + +Although Cohen-Macaulay complexes are not necessarily partitionable, their $h$-vectors are still nice; they are always non-negative and in fact coincide with the $h$-vectors of shellable complexes. 
Without the Partitionability Conjecture, the question remains: + +**Question 4.3.** *What does the $h$-vector of a Cohen-Macaulay simplicial complex count?* + +One answer is given by [DZ01], where it is shown that every simplicial complex can be decomposed into Boolean trees indexed by iterated Betti numbers; see [DZ01, Corollary 3.5]. The starting point of that paper is a conjecture of Kalai [Kal02, Conjecture 22] that any simplicial complex can be partitioned into intervals in a way related to algebraic shifting. Kalai's conjecture would have implied that simplicial complexes could be decomposed into Boolean intervals. Such a decomposition into intervals, however, would have implied the Partitionability Conjecture. Hence our result provides a counterexample to Kalai's conjecture. Moreover, the decomposition in [DZ01] may be best possible at this level of generality. + +## ACKNOWLEDGEMENTS + +We are grateful to Pedro Felzenszwalb for valuable discussions and generous help with coding and Matlab implementations. We thank Margaret Bayer, Louis Billera, Gil Kalai, and Christos Athanasiadis for helpful discussions and suggestions. The open-source computer algebra system Sage [S+13] and Masahiro Hachimori's online library of simplicial complexes [Hac01] were valuable resources. + +## REFERENCES + +[Bac76] Kenneth Paul Baclawski. *Homology and combinatorics of ordered sets*. PhD thesis, Harvard University, 1976. + +[Bac80] Kenneth Baclawski. Cohen-Macaulay ordered sets. *J. Algebra*, **63**(1):226–258, 1980. + +[Bal77] O. Michael Ball. *Network reliability analysis: algorithms and complexity*. PhD thesis, Cornell University, 1977. + +[BGS82] Anders Björner, Adriano M. Garsia, and Richard P. Stanley. An introduction to Cohen-Macaulay partially ordered sets. In *Ordered sets (Banff, Alta., 1981)*, volume 83 of NATO Adv. Study Inst. Ser. C: Math. Phys. Sci., pages 583–615. Reidel, Dordrecht-Boston, Mass., 1982. + +[BH93] Winfried Bruns and Jürgen Herzog. 
*Cohen-Macaulay rings*, volume 39 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1993. + +[Duv96] Art M. Duval. Algebraic shifting and sequentially Cohen-Macaulay simplicial complexes. *Electron. J. Combin.*, **3**(1):Research Paper 21, approx. 14 pp. (electronic), 1996. + +[DZ01] Art M. Duval and Ping Zhang. Iterated homology and decompositions of simplicial complexes. *Israel J. Math.*, **121**:313–331, 2001. + +[Gar80] Adriano M. Garsia. Combinatorial methods in the theory of Cohen-Macaulay rings. *Adv. in Math.*, **38**(3):229–266, 1980. \ No newline at end of file diff --git a/samples/texts/4392850/page_12.md b/samples/texts/4392850/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..ae87bd53723497e6f4a7c954d83363b6ef793e90 --- /dev/null +++ b/samples/texts/4392850/page_12.md @@ -0,0 +1,49 @@ +[Gro64] Alexandre Grothendieck. Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas. I. *Inst. Hautes Études Sci. Publ. Math.*, (20):259, 1964. + +[Hac00] Masahiro Hachimori. Constructible complexes and recursive division of posets. *Theoret. Comput. Sci.*, 235(2):225–237, 2000. Combinatorics and optimization (Okinawa, 1996). + +[Hac01] Masahiro Hachimori. Simplicial Complex Library, 2001. http://infoshako.sk.tsukuba.ac.jp/~hachi/math/library/index_eng.html, accessed 04/12/2015. + +[Hac08] Masahiro Hachimori. Decompositions of two-dimensional simplicial complexes. *Discrete Math.*, 308(11):2307–2312, 2008. + +[Hat02] Allen Hatcher. *Algebraic Topology*. Cambridge University Press, Cambridge, 2002. + +[Her13] Jürgen Herzog. A survey on Stanley depth. In *Monomial ideals, computations and applications*, volume 2083 of *Lecture Notes in Math.*, pages 3–45. Springer, Heidelberg, 2013. + +[HJY08] Jürgen Herzog, Ali Soleyman Jahan, and Siamak Yassemi. Stanley decompositions and partitionable simplicial complexes. *J. Algebraic Combin.*, 27(1):113–125, 2008.
+ +[Hoc72] Melvin Hochster. Rings of invariants of tori, Cohen-Macaulay rings generated by monomials, and polytopes. *Ann. of Math. (2)*, 96:318–337, 1972. + +[Hoc80] Melvin Hochster. Cohen-Macaulay rings and modules. In *Proceedings of the International Congress of Mathematicians (Helsinki, 1978)*, pages 291–298. Acad. Sci. Fennica, Helsinki, 1980. + +[IZ14] Bogdan Ichim and Andrei Zarojanu. An algorithm for computing the multigraded Hilbert depth of a module. *Exp. Math.*, 23(3):322–331, 2014. + +[Kal02] Gil Kalai. Algebraic shifting. In *Computational commutative algebra and combinatorics (Osaka, 1999)*, volume 33 of *Adv. Stud. Pure Math.*, pages 121–163. Math. Soc. Japan, Tokyo, 2002. + +[Kat] Lukas Katthän. Personal communication, April 28, 2015. + +[Kat16] Lukas Katthän. Betti posets and the Stanley depth. Preprint, arXiv:1509.08275v2, 2016. + +[KS91] Peter Kleinschmidt and Zeev Smilansky. New results for simplicial spherical polytopes. In *Discrete and computational geometry (New Brunswick, NJ, 1989/1990)*, volume 6 of *DIMACS Ser. Discrete Math. Theoret. Comput. Sci.*, pages 187–197. Amer. Math. Soc., Providence, RI, 1991. + +[Lut04a] Frank H. Lutz. Small examples of nonconstructible simplicial balls and spheres. *SIAM J. Discrete Math.*, 18(1):103–109 (electronic), 2004. + +[Lut04b] Frank H. Lutz. A vertex-minimal non-shellable simplicial 3-ball with 9 vertices and 18 facets. *Electronic Geometry Models*, (2003.05.004), 2004. http://www.eg-models.de/models/Simplicial_Manifolds/2003.05.004/_preview.html, accessed 04/12/2015. + +[Lut08] Frank H. Lutz. Combinatorial 3-manifolds with 10 vertices. *Beiträge Algebra Geom.*, 49(1):97–106, 2008. + +[Mun84] James R. Munkres. Topological results in combinatorics. *Michigan Math. J.*, 31(1):113–128, 1984. + +[Pro77] J. Scott Provan. *Decompositions, shellings, and diameters of simplicial complexes and convex polyhedra*. PhD thesis, Cornell University, 1977. + +[PSFTY09] Mohammad R. Pournaki, Seyed A. 
Seyed Fakhari, Massoud Tousi, and Siamak Yassemi. What is ... Stanley depth? *Notices Amer. Math. Soc.*, 56(9):1106–1108, 2009. + +[Ree57] David Rees. The grade of an ideal or module. *Proc. Cambridge Philos. Soc.*, 53:28–42, 1957. + +[Rei76] Gerald Allen Reisner. Cohen-Macaulay quotients of polynomial rings. *Advances in Math.*, 21(1):30–49, 1976. + +[Rud58] Mary Ellen Rudin. An unshellable triangulation of a tetrahedron. *Bull. Amer. Math. Soc.*, 64:90–91, 1958. + +[S+13] W.A. Stein et al. *Sage Mathematics Software (Version 6.0)*. The Sage Development Team, 2013. http://www.sagemath.org. + +[Sta75a] Richard P. Stanley. Cohen-Macaulay rings and constructible polytopes. *Bull. Amer. Math. Soc.*, 81:133–135, 1975. \ No newline at end of file diff --git a/samples/texts/4392850/page_13.md b/samples/texts/4392850/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..beaf4ed27ee237a214cff23744ac19faf1919db8 --- /dev/null +++ b/samples/texts/4392850/page_13.md @@ -0,0 +1,29 @@ +[Sta75b] Richard P. Stanley. The upper bound conjecture and Cohen-Macaulay rings. *Studies in Appl. Math.*, 54(2):135-142, 1975. + +[Sta77] Richard P. Stanley. Cohen-Macaulay complexes. In *Higher combinatorics (Proc. NATO Advanced Study Inst., Berlin, 1976)*, pages 51-62. NATO Adv. Study Inst. Ser., Ser. C: Math. and Phys. Sci., 31. Reidel, Dordrecht, 1977. + +[Sta79] Richard P. Stanley. Balanced Cohen-Macaulay complexes. *Trans. Amer. Math. Soc.*, 249(1):139-157, 1979. + +[Sta82] Richard P. Stanley. Linear Diophantine equations and local cohomology. *Invent. Math.*, 68(2):175-193, 1982. + +[Sta87] Richard P. Stanley. Generalized H-vectors, intersection cohomology of toric varieties, and related results. In *Commutative algebra and combinatorics (Kyoto, 1985)*, volume 11 of *Adv. Stud. Pure Math.*, pages 187-213. North-Holland, Amsterdam, 1987. + +[Sta96] Richard P. Stanley. *Combinatorics and commutative algebra*, volume 41 of *Progress in Mathematics*. 
Birkhäuser Boston, Inc., Boston, MA, second edition, 1996. + +[Sta14] Richard P. Stanley. How the upper bound conjecture was proved. *Ann. Comb.*, 18(3):533-539, 2014. + +[Zie98] Günter M. Ziegler. Shelling polyhedral 3-balls and 4-polytopes. *Discrete Comput. Geom.*, 19(2):159-174, 1998. + +[ZS60] Oscar Zariski and Pierre Samuel. *Commutative algebra. Vol. II*. The University Series in Higher Mathematics. D. Van Nostrand Co., Inc., Princeton, N. J.-Toronto-London-New York, 1960. + +DEPARTMENT OF MATHEMATICAL SCIENCES, UNIVERSITY OF TEXAS AT EL PASO +*E-mail address:* aduval@utep.edu + +DEPARTMENT OF MATHEMATICS, UNIVERSITY OF KANSAS +*E-mail address:* bennet@ku.edu + +DIVISION OF APPLIED MATHEMATICS AND DEPARTMENT OF COMPUTER SCIENCE, BROWN UNIVERSITY +*E-mail address:* klivans@brown.edu + +DEPARTMENT OF MATHEMATICS, UNIVERSITY OF KANSAS +*E-mail address:* jlmartin@ku.edu \ No newline at end of file diff --git a/samples/texts/4392850/page_2.md b/samples/texts/4392850/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..b1e5a2aa1b99d6b6e639830baf0b89cc32f5aa03 --- /dev/null +++ b/samples/texts/4392850/page_2.md @@ -0,0 +1,15 @@ +where $d = \dim \Delta$. The number $f_i = f_i(\Delta)$ is the number of $i$-dimensional faces (simplices) in $\Delta$. The $h$-vector is more subtle. It carries the same information as the $f$-vector (the two are related by an invertible linear transformation), and arises naturally in algebra: the Hilbert series of the Stanley–Reisner ring of $\Delta$ is $(1-t)^{-d-1} \sum_j h_j(\Delta) t^j$. (See Section 2 for precise definitions.) It is not at all apparent whether the numbers $h_j(\Delta)$ have a combinatorial interpretation; for instance, they need not be positive in general.
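Concretely, the $h$-vector is obtained from the $f$-vector by a finite alternating sum (made precise in Section 2), so the conversion is easy to compute. A minimal sketch (ours, not from the paper; the function name is our own):

```python
from math import comb

def h_vector(f):
    """Convert f = (f_{-1}, f_0, ..., f_d) to (h_0, ..., h_{d+1}) via
    h_k = sum_{i=0}^{k} (-1)^(k-i) * C(d+1-i, k-i) * f_{i-1},
    where the list index i holds f_{i-1}."""
    d = len(f) - 2
    return [sum((-1) ** (k - i) * comb(d + 1 - i, k - i) * f[i]
                for i in range(k + 1))
            for k in range(d + 2)]

# Boundary of the tetrahedron: f = (1, 4, 6, 4) gives h = (1, 1, 1, 1).
print(h_vector([1, 4, 6, 4]))            # [1, 1, 1, 1]

# The f-vector (1, 16, 71, 98, 42) of the small counterexample yields
# (1, 12, 29, 0, 0), i.e. the stated h-vector (1, 12, 29) up to trailing zeros.
print(h_vector([1, 16, 71, 98, 42]))     # [1, 12, 29, 0, 0]
```

Applying the same transform in reverse recovers the $f$-vector, reflecting that the two vectors determine each other.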
Partitionability was introduced by Provan [Pro77] and Ball [Bal77] in the context of reliability analysis. For a partitionable complex, the $h$-numbers enumerate the minimum elements of the intervals by size. In particular, shellable complexes are easily seen to be partitionable, and hence their $h$-vectors have this interpretation. The strict inclusions + +$$ \{\text{shellable complexes}\} \subsetneq \{\text{constructible complexes}\} \subsetneq \{\text{Cohen-Macaulay complexes}\} $$ + +are also well known. For example, the nonshellable balls constructed by Rudin [Rud58] and Ziegler [Zie98] are constructible (see also [Lut04a]), and any triangulation of the dunce hat is Cohen–Macaulay but not constructible [Hac08, §2]. On the other hand, the possible $h$-vectors of Cohen–Macaulay, constructible, and shellable complexes are all the same [Sta77, Theorem 6], suggesting that their entries ought to count something explicit. The Partitionability Conjecture would have provided a combinatorial interpretation of the $h$-vectors of Cohen–Macaulay complexes. + +The idea of our construction is to work with *relative simplicial complexes*. Suppose $Q = (X, A)$ is a relative simplicial complex that is not partitionable, but with $X$ and $A$ Cohen–Macaulay. Theorem 3.1 gives a general method of gluing together sufficiently many copies of $X$ along $A$ to obtain a counterexample to the Partitionability Conjecture, provided that $A$ is an induced subcomplex of $X$. This reduces the problem to finding an appropriate pair $(X, A)$. Our starting point is the nonshellable simplicial 3-ball $Z$ constructed by Ziegler [Zie98], in which we find a suitable subcomplex $A$ and in turn the desired relative complex $Q$ (Theorem 3.3). By refining the construction, we are able to obtain, in Theorem 3.5, a Cohen–Macaulay non-partitionable complex that is much smaller than predicted by Theorem 3.1, with $f$-vector $(1, 16, 71, 98, 42)$ and $h$-vector $(1, 12, 29)$.
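Whether a proposed family of intervals really partitions a complex is mechanically checkable; a minimal sketch (ours, with hypothetical helper names) verifies a partitioning of a small shellable complex:

```python
from itertools import combinations

def faces_of(facets):
    """All faces (as frozensets, including the empty face) of the
    simplicial complex generated by the given facets."""
    faces = {frozenset()}
    for F in facets:
        for k in range(1, len(F) + 1):
            faces.update(map(frozenset, combinations(F, k)))
    return faces

def is_partitioning(complex_faces, intervals):
    """Check that the Boolean intervals [R, F] are pairwise disjoint
    and that their union is the whole complex."""
    covered = [t for R, F in intervals
               for t in complex_faces if R <= t <= F]
    return len(covered) == len(set(covered)) == len(complex_faces)

# Two triangles glued along the edge 13; the shelling order (123, 134)
# induces the partitioning [∅, 123] ⊔ [{4}, 134].
delta = faces_of([{1, 2, 3}, {1, 3, 4}])
intervals = [(frozenset(), frozenset({1, 2, 3})),
             (frozenset({4}), frozenset({1, 3, 4}))]
print(is_partitioning(delta, intervals))  # True
```

The minimum elements here have sizes 0 and 1, matching the interpretation of the $h$-numbers described above.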
+ +The existence of a Cohen–Macaulay nonpartitionable complex has an important consequence in commutative algebra. For a polynomial ring $S = \mathbb{k}[x_1, \dots, x_n]$ and a $\mathbb{Z}^n$-graded $S$-module $M$, many fundamental algebraic invariants of $M$, such as its dimension and multigraded Hilbert series, can be profitably studied using combinatorics. On the other hand, the combinatorial properties of the depth of $M$ are less well understood. In [Sta82], Stanley proposed a purely combinatorial analogue of depth, defined in terms of certain vector space decompositions of $M$. This invariant, now known as the *Stanley depth* and written $\mathrm{sdepth} M$, has attracted considerable recent attention (see [PSFTY09] for an accessible introduction to the subject, and [Her13] for a comprehensive survey), centering around the following conjecture of Stanley [Sta82, Conjecture 5.1]: + +**Conjecture 1.2 (Depth Conjecture).** For all $\mathbb{Z}^n$-graded $S$-modules $M$, + +$\mathrm{sdepth} M \geq \mathrm{depth} M.$ \ No newline at end of file diff --git a/samples/texts/4392850/page_3.md b/samples/texts/4392850/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..9534e9ee0b2d584ef3dd1d369a2d158f7d8cf4ad --- /dev/null +++ b/samples/texts/4392850/page_3.md @@ -0,0 +1,27 @@ +Herzog, Jahan, and Yassemi proved [HJY08, Corollary 4.5] that when $I$ is the Stanley-Reisner ideal of a Cohen-Macaulay complex $\Delta$, the inequality $\mathrm{sdepth} S/I \ge \mathrm{depth} S/I$ is equivalent to the partitionability of $\Delta$. Therefore, our counterexample to the Partitionability Conjecture disproves the Depth Conjecture as well. We exhibit a smaller counterexample to the Depth Conjecture using a relative complex in Remark 3.6; see Section 3.2. + +It was also previously not known whether all constructible complexes were partitionable; see, e.g., [Hac00, § 4]. The counterexample we obtain is not only Cohen-Macaulay, but in fact constructible.
Therefore, even constructibility does not imply +partitionability. + +## 2. PRELIMINARIES + +**2.1. Simplicial and relative simplicial complexes.** Throughout the paper, all complexes will be finite. Let $V$ be a finite set. A simplicial complex on $V$ is a collection $\Delta$ of subsets of $V$ such that whenever $\sigma \in \Delta$ and $\tau \subseteq \sigma$, then $\tau \in \Delta$. Equivalently, $\Delta$ is an order ideal in the Boolean poset $2^V$. The symbol $|\Delta|$ denotes the standard geometric realization of $\Delta$. The elements of $\Delta$ are called the *faces* of $\Delta$, and the elements of $V$ are *vertices*. Maximal faces are called *facets*. The dimension of a face $\sigma$ is $\dim \sigma = |\sigma| - 1$, and the dimension of $\Delta$ is $\dim \Delta = \max\{\dim \sigma | \sigma \in \Delta\}$. We often write $\Delta^d$ to indicate that $\dim \Delta = d$. A complex is *pure* if all maximal faces have the same dimension. A subcomplex of $\Delta$ is a simplicial complex $\Gamma$ with $\Gamma \subseteq \Delta$. A subcomplex is an *induced subcomplex* if it is of the form + +$$ \Delta|_W := \{\sigma \in \Delta \mid \sigma \subseteq W\} $$ + +for some $W \subseteq V$. + +In the construction of our counterexample, we will work with the more general class of relative simplicial complexes. A relative complex $\Phi$ on $V$ is a subset of $2^V$ that is convex: if $\rho, \tau \in \Phi$ and $\rho \subseteq \sigma \subseteq \tau$, then $\sigma \in \Phi$. We sometimes refer to simplicial complexes as “absolute” to distinguish them from relative complexes. + +Every relative complex can be expressed as a pair $\Phi = (\Delta, \Gamma) := \Delta \setminus \Gamma$, where $\Delta$ is a simplicial complex and $\Gamma \subseteq \Delta$ is a subcomplex. Topologically, $\Phi$ corresponds to the quotient space $|\Delta|/|\Gamma|$. Note that there are infinitely many possibilities for the pair $\Delta, \Gamma$. 
The unique minimal expression is obtained by letting $\Delta = \tilde{\Phi}$ be the combinatorial closure of $\Phi$, i.e., the smallest simplicial complex containing $\Phi$ as a subset, and setting $\Gamma = \Delta \setminus \Phi$. Note that in this case $\dim \Gamma < \dim \Delta$, because the maximal faces of $\Delta$ are precisely those of $\Phi$. + +The notation $\tilde{H}_i(\Delta)$ denotes the $i^{th}$ reduced simplicial homology group with coefficients in $\mathbb{Z}$. (The underlying ring does not matter for our purposes.) The simplicial homology groups $\tilde{H}_i(\Phi)$ of a relative complex $\Phi = (\Delta, \Gamma)$ are just the relative homology groups $\tilde{H}_i(\Delta, \Gamma)$ in the usual topological sense (see, e.g., [Hat02]); in particular, the homology groups of $\Delta$, $\Gamma$, and $\Phi$ fit into a long exact sequence. + +The *f-vector* of an (absolute or relative) complex $\Delta^d$ is $f(\Delta) = (f_{-1}, f_0, \dots, f_d)$, +where $f_i = f_i(\Delta)$ is the number of $i$-dimensional faces of $\Delta$. Note that $f_{-1}(\Delta) = 1$ +for every absolute complex other than the void complex $\Delta = \emptyset$. The *h-vector* +$h(\Delta) = (h_0, h_1, \dots, h_{d+1})$ is defined by + +$$ h_k = \sum_{i=0}^{k} (-1)^{k-i} \binom{d+1-i}{k-i} f_{i-1}, \quad 0 \le k \le d+1. $$ \ No newline at end of file diff --git a/samples/texts/4392850/page_4.md b/samples/texts/4392850/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..5f94f7eaf86f763f39566cad59fea1de973d8a63 --- /dev/null +++ b/samples/texts/4392850/page_4.md @@ -0,0 +1,40 @@ +In particular, the *f-* and *h*-vectors determine each other. + +The link of a face $\sigma \in \Delta$ is defined as + +$$\mathrm{link}_{\Delta}(\sigma) := \{\tau \in \Delta \mid \tau \cap \sigma = \emptyset, \tau \cup \sigma \in \Delta\}.$$ + +Observe that if $\Delta^d$ is pure and $\dim \sigma = k$, then $\dim \mathrm{link}_{\Delta}(\sigma) = d - k - 1$. 
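The link operation and this dimension count can be verified directly on a small example; a minimal sketch (ours, with hypothetical helper names):

```python
from itertools import combinations

def faces_of(facets):
    """All faces (as frozensets, including the empty face) of the
    complex generated by the given facets."""
    faces = {frozenset()}
    for F in facets:
        for k in range(1, len(F) + 1):
            faces.update(map(frozenset, combinations(F, k)))
    return faces

def link(delta, sigma):
    """link_Delta(sigma) = {tau in Delta : tau ∩ sigma = ∅, tau ∪ sigma in Delta}."""
    sigma = frozenset(sigma)
    return {tau for tau in delta if not (tau & sigma) and (tau | sigma) in delta}

# Boundary of the tetrahedron: pure of dimension d = 2.
delta = faces_of([{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}])

# The link of the vertex 1 (dim sigma = 0) is the boundary of the
# triangle 234, of dimension d - k - 1 = 1.
assert link(delta, {1}) == faces_of([{2, 3}, {2, 4}, {3, 4}])
```

The same function also returns the trivial complex $\{\emptyset\}$ when $\sigma$ is a facet, matching the convention stated next.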
If $\sigma$ is a facet of $\Delta$ then $\mathrm{link}_{\Delta}(\sigma) = \{\emptyset\}$, the trivial complex with only the empty face, and if $\sigma \notin \Delta$ then we set $\mathrm{link}_{\Delta}(\sigma)$ to be the void complex with no faces. + +If $\Phi = (\Delta, \Gamma)$ is a relative complex and $\sigma \in \Delta$, we can define the relative link by + +$$\mathrm{link}_{\Phi}(\sigma) = (\mathrm{link}_{\Delta}(\sigma), \mathrm{link}_{\Gamma}(\sigma)).$$ + +It is easy to check that this construction is intrinsic to $\Phi$, i.e., it does not depend on the choice of the pair $\Delta, \Gamma$. Note that $\mathrm{link}_{\Phi}(\sigma)$ is not necessarily a subset of $\Phi$. + +**2.2. Cohen–Macaulay simplicial complexes.** A ring is *Cohen–Macaulay* if its depth equals its (Krull) dimension. Reisner’s criterion [Rei76, Theorem 1] states that Cohen–Macaulayness of the Stanley–Reisner ring [Sta96, §II.1] of a simplicial complex can be expressed in terms of simplicial homology, and we will take this criterion as our definition. The relative version of Reisner’s criterion is Theorem 5.3 of [Sta87]. + +**Theorem 2.1.** [Rei76, Sta87] A simplicial complex $\Delta$ is Cohen–Macaulay if for every face $\sigma \in \Delta$, + +$$\tilde{H}_i(\mathrm{link}_\Delta(\sigma)) = 0 \quad \text{for } i < \dim \mathrm{link}_\Delta(\sigma). \qquad (1)$$ + +Similarly, a relative complex $\Phi = (\Delta, \Gamma)$ is Cohen–Macaulay if for every $\sigma \in \Delta$, + +$$\tilde{H}_i(\mathrm{link}_{\Delta}(\sigma), \mathrm{link}_{\Gamma}(\sigma)) = 0 \quad \text{for } i < \dim \mathrm{link}_{\Delta}(\sigma).$$ + +In fact, Cohen–Macaulayness is a topological invariant: it depends only on the homeomorphism type of the geometric realization $|\Delta|$. This was proved by Munkres [Mun84]. Topological invariance holds for relative complexes as well [Sta96, Corollary III.7.3]. Importantly, if $|\Delta|$ is homeomorphic to a ball or to a sphere, then $\Delta$ is Cohen–Macaulay [Mun84, §2]. 
+ +The following technical lemma will be central to our construction. + +**Lemma 2.2.** Let $\Delta_1$ and $\Delta_2$ be $d$-dimensional Cohen–Macaulay simplicial complexes on disjoint vertex sets. Let $\Gamma$ be a Cohen–Macaulay simplicial complex of dimension $d$ or $d-1$, and suppose that each $\Delta_i$ contains a copy of $\Gamma$ as an induced subcomplex. Then the complex $\Omega$ obtained by identifying the two copies of $\Gamma$ (or “gluing together $\Delta_1$ and $\Delta_2$ along $\Gamma$”) is Cohen–Macaulay. + +*Proof.* It is clear that $\Omega$ is a CW-complex. The requirement that each copy of $\Gamma$ is an *induced* subcomplex of $\Delta_i$ means that $\Omega$ is in fact a simplicial complex (because faces with the same underlying vertex set will be identified). It remains to show that $\Omega$ is Cohen–Macaulay. Henceforth, to simplify the notation, we will identify $\Gamma$ with $\Delta_1 \cap \Delta_2$, so that $\Omega$ is identified with $\Delta_1 \cup \Delta_2$. + +Let $\sigma$ be a face of $\Omega$. Note that + +$$\begin{align} +\mathrm{link}_{\Omega}(\sigma) &= \{\tau \in \Omega \mid \tau \cap \sigma = \emptyset, \tau \cup \sigma \in \Omega\} = \mathrm{link}_{\Delta_1}(\sigma) \cup \mathrm{link}_{\Delta_2}(\sigma), \tag{2} \\ +\mathrm{link}_{\Gamma}(\sigma) &= \{\tau \in \Gamma \mid \tau \cap \sigma = \emptyset, \tau \cup \sigma \in \Gamma\} = \mathrm{link}_{\Delta_1}(\sigma) \cap \mathrm{link}_{\Delta_2}(\sigma). +\end{align}$$ + +First, suppose that $\sigma \in \Delta_1 \setminus \Delta_2$. Then Reisner’s criterion (1) holds for $\sigma$ because $\mathrm{link}_{\Omega}(\sigma) = \mathrm{link}_{\Delta_1}(\sigma)$, and $\Delta_1$ is Cohen–Macaulay. Likewise, Reisner’s criterion holds for faces of $\Delta_2 \setminus \Delta_1$. 
\ No newline at end of file diff --git a/samples/texts/4392850/page_5.md b/samples/texts/4392850/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..db80f436d438dcbbb3fe336a0c6f589a8d35f0bf --- /dev/null +++ b/samples/texts/4392850/page_5.md @@ -0,0 +1,35 @@ +On the other hand, suppose that $\sigma \in \Gamma$. Then the observations (2) give rise to a reduced Mayer-Vietoris sequence + +$$ \dots \rightarrow \tilde{H}_i(\text{link}_{\Delta_1}(\sigma)) \oplus \tilde{H}_i(\text{link}_{\Delta_2}(\sigma)) \rightarrow \tilde{H}_i(\text{link}_{\Omega}(\sigma)) \rightarrow \tilde{H}_{i-1}(\text{link}_{\Gamma}(\sigma)) \rightarrow \dots $$ + +But since $\Delta_1, \Delta_2, \Gamma$ are Cohen-Macaulay and $\dim \Gamma \ge \dim \Delta_1 - 1$, the Mayer-Vietoris sequence implies that $\tilde{H}_i(\text{link}_{\Omega}(\sigma)) = 0$ for all $i < d - \dim \sigma - 1$. This is precisely the statement that Reisner's criterion holds for $\sigma$. $\square$ + +Iterating Lemma 2.2, we obtain immediately: + +**Proposition 2.3.** Let $\Delta_1, \dots, \Delta_n$ be $d$-dimensional Cohen-Macaulay simplicial complexes on disjoint vertex sets. Let $\Gamma$ be a Cohen-Macaulay simplicial complex of dimension $d-1$ or $d$, and suppose that each $\Delta_i$ contains a copy of $\Gamma$ as an induced subcomplex. Then the complex $\Omega$ obtained from $\Delta_1, \dots, \Delta_n$ by identifying the $n$ copies of $\Gamma$ is Cohen-Macaulay. + +**2.3. Shellability, partitionability, and constructibility.** + +**Definition 2.4.** Let $\Delta$ be a pure simplicial complex. A *shelling* of $\Delta$ is a total ordering $F_1, \dots, F_n$ of its facets so that for every $j$, the set + +$$ \{\sigma \subseteq F_j \mid \sigma \not\subseteq F_i \text{ for all } i < j\} $$ + +has a unique minimal element $R_j$. + +The *h*-vector of a shellable complex has a simple combinatorial interpretation: + +$$ h_k(\Delta) = \#\{j \mid \#R_j = k\}. 
\qquad (3) $$ + +In particular $h_k(\Delta) \ge 0$ for all $k$, and in fact $h_k(\Delta) = 0$ implies $h_\ell(\Delta) = 0$ for all $\ell > k$ (a consequence of [BH93, Theorem 5.1.15]). Shellable complexes are Cohen-Macaulay, although the converse is not true: well-known counterexamples include any triangulation of the dunce hat, as well as the nonshellable balls constructed by Rudin [Rud58] and Ziegler [Zie98]. On the other hand, Cohen-Macaulay complexes satisfy the same conditions on the *h*-vector, so it is natural to look for a combinatorial interpretation of their *h*-vectors. + +**Definition 2.5.** Let $\Delta$ be a pure simplicial complex with facets $F_1, \dots, F_n$. A *partitioning* $\mathcal{P}$ of $\Delta$ is a decomposition into pairwise-disjoint Boolean intervals + +$$ \Delta = \bigsqcup_{i=1}^{n} [R_i, F_i] $$ + +where $[R_i, F_i] = \{\sigma \in \Delta \mid R_i \subseteq \sigma \subseteq F_i\}$. We say that each $F_i$ is *matched* to the corresponding $R_i$. + +Clearly, shellable complexes are partitionable. If $\Delta$ is partitionable, then its *h*-vector automatically carries the combinatorial interpretation (3) [Sta96, Proposition III.2.3]. Moreover, Definitions 2.4 and 2.5, and the interpretation of the *h*-vector, carry over precisely from absolute to relative complexes. + +**Example 2.6.** A partitionable complex need not be Cohen-Macaulay, much less shellable. The following example is due to Björner [Sta96, p. 85]. Let $\Delta$ be the pure 2-dimensional complex with facets 123, 124, 134, 234, 156 (abbreviating $\{1, 2, 3\}$ by 123, etc.). This complex is not Cohen-Macaulay because vertex 1 fails Reisner's criterion (1), but it is partitionable: + +$$ \Delta = [\emptyset, 156] \cup [2, 123] \cup [3, 134] \cup [4, 124] \cup [234, 234]. 
$$ \ No newline at end of file diff --git a/samples/texts/4392850/page_6.md b/samples/texts/4392850/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..2de79adfdcd561876e1a3bf0cbf27282345973a2 --- /dev/null +++ b/samples/texts/4392850/page_6.md @@ -0,0 +1,33 @@ +In particular, $h(\Delta) = (1, 3, 0, 1)$, which is not the $h$-vector of any Cohen-Macaulay complex (since $h_2 = 0$ and $h_3 > 0$). + +Constructibility, introduced by Hochster [Hoc72], is a combinatorial condition intermediate between shellability and Cohen-Macaulayness. + +**Definition 2.7.** A complex $\Delta^d$ is *constructible* if it is a simplex, or if it can be written as $\Delta = \Delta_1 \cup \Delta_2$, where $\Delta_1, \Delta_2$, and $\Delta_1 \cap \Delta_2$ are constructible of dimensions $d, d$, and $d-1$ respectively. + +Hachimori [Hac00] investigated the question of whether constructibility implies partitionability. Our counterexample to the Partitionability Conjecture is in fact constructible, resolving this question as well. + +### 3. THE COUNTEREXAMPLE + +We first give a general construction that reduces the problem of finding a counterexample to the problem of constructing a certain kind of non-partitionable Cohen-Macaulay relative complex. + +**Theorem 3.1.** Let $Q = (X, A)$ be a relative complex such that + +(i) $X$ and $A$ are Cohen-Macaulay; + +(ii) $A$ is an induced subcomplex of $X$ of codimension at most 1; and + +(iii) $Q$ is not partitionable. + +Let $k$ be the total number of faces of $A$, let $N > k$, and let $C = C_N$ be the simplicial complex constructed from $N$ disjoint copies of $X$ identified along the subcomplex $A$. Then $C$ is Cohen-Macaulay and not partitionable. + +*Proof.* First, $C$ is Cohen-Macaulay by Proposition 2.3. Second, suppose that $C$ has a partitioning $\mathcal{P}$. Let $X_1, X_2, \dots, X_N$ be the $N$ copies of $X$. 
By the pigeonhole principle, since $N > k$, there is some copy of $X$, say $X_N$, none of whose facets is matched to a face in $A$. Let $[R_1, F_1], \dots, [R_\ell, F_\ell]$ be the intervals in $\mathcal{P}$ for which $F_i \in X_N$; then + +$$ \bigcup_{i=1}^\ell [R_i, F_i] \subseteq X_N \setminus A. \quad (4) $$ + +No other interval in $\mathcal{P}$ can intersect $X_N \setminus A$ nontrivially, so in fact equality must hold in (4). But then (4) is in fact a partitioning of $X_N \setminus A = Q$, which was assumed to be non-partitionable. $\square$ + +**Remark 3.2.** It is easy to see that a subcomplex $A \subset X$ is an induced subcomplex if and only if every minimal face of $X \setminus A$ has dimension 0. Therefore, this condition may be viewed as a restriction on the relative complex $(X, A)$. + +**3.1. The construction.** Throughout, we abbreviate the simplex on vertices $\{v_1, \dots, v_k\}$ by $v_1 \cdots v_k$. Our construction begins with Ziegler’s nonshellable 3-ball $Z$, which is a nonshellable triangulation of the 3-ball with 10 vertices labeled 0, 1, ..., 9, and the following 21 facets [Zie98, §4]: + +
0123  0125  0237  0256  0267  1234  1249
1256  1269  1347  1457  1458  1489  1569
1589  2348  2367  2368  3478  3678  4578
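As a sanity check, the facet list can be processed mechanically. The snippet below is our own illustration (the variable names are not from the paper): it confirms that there are 21 tetrahedra on the vertices 0–9, and extracts the facets supported on $\{0, 2, 3, 4, 6, 7, 8\}$, which reappear below as the facets of the induced subcomplex $B$.

```python
facets_Z = ("0123 0125 0237 0256 0267 1234 1249 "
            "1256 1269 1347 1457 1458 1489 1569 "
            "1589 2348 2367 2368 3478 3678 4578").split()

Z = [frozenset(int(c) for c in F) for F in facets_Z]
assert len(Z) == 21 and all(len(F) == 4 for F in Z)
assert frozenset().union(*Z) == set(range(10))  # the vertices are exactly 0..9

# Facets of Z lying inside {0, 2, 3, 4, 6, 7, 8}.
W = {0, 2, 3, 4, 6, 7, 8}
B_facets = sorted("".join(map(str, sorted(F))) for F in Z if F <= W)
print(B_facets)  # ['0237', '0267', '2348', '2367', '2368', '3478', '3678']
```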
\ No newline at end of file diff --git a/samples/texts/4392850/page_7.md b/samples/texts/4392850/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..27ec0c1057bcc035b25ea86719c548df840e4316 --- /dev/null +++ b/samples/texts/4392850/page_7.md @@ -0,0 +1,27 @@ +FIGURE 1. A perspective view of the 1-skeleton of $\bar{Q}$ from the top. Dark edges are exterior edges visible from the top; light edges are interior edges, or exterior edges at the bottom. Light vertices are in A. + +The complex $Z$ is not shellable, but it is partitionable, Cohen–Macaulay and, in fact, constructible [Hac01]. + +Let $B$ be the induced subcomplex $Z|_{\{0,2,3,4,6,7,8\}}$. That is, $B$ is the pure 3-dimensional complex with facets + +$$0237, 0267, 2367, 2368, 2348, 3678, 3478.$$ + +The given order is a shelling of $B$; in particular $B$ is Cohen–Macaulay. Define $Q$ to be the relative complex $Q = (Z, B)$. Then $Q$ is also Cohen–Macaulay by [Duv96, Corollary 3.2]. + +The facets of $Q$ are + +$$ \begin{matrix} 1249, 1269, 1569, 1589, 1489, 1458, 1457, \\ 4578, 1256, 0125, 0256, 0123, 1234, 1347. \end{matrix} \quad (5) $$ + +The minimal faces of $Q$ are just the vertices 1, 5, 9. We can picture $Q$ easily by considering its combinatorial closure $\bar{Q}$, that is, the 3-dimensional simplicial complex generated by the facets (5). In fact $\bar{Q}$ is a shellable ball; the ordering of facets given in (5) is a shelling. The complement $A = \bar{Q} \setminus Q = \bar{Q}|_{\{0,2,3,4,6,7,8\}}$ is the shellable 2-ball with facets + +$$026, 023, 234, 347, 478. \qquad (6)$$ + +Thus $Q = (\bar{Q}, A)$. The $f$- and $h$-vectors of these complexes are + +$$ f(\bar{Q}) = (1, 10, 31, 36, 14), \qquad h(\bar{Q}) = (1, 6, 7, 0, 0), $$ + +$$ f(A) = (1, 7, 11, 5, 0), \qquad h(A) = (1, 4, 0, 0, 0), $$ + +$$ f(Q) = (0, 3, 20, 31, 14), \qquad h(Q) = (0, 3, 11, 0, 0). $$ + +The 1-skeleton of $\bar{Q}$ is shown in Figure 1. 
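The $f$- and $h$-vectors above can be cross-checked numerically. The following sketch is ours, using the standard conversion $h_k = \sum_{i=0}^{k} (-1)^{k-i} \binom{d-i}{k-i} f_{i-1}$, where $d = \dim + 1$ and $f = (f_{-1}, f_0, \dots, f_{d-1})$:

```python
from math import comb

def h_vector(f, d):
    """Convert f = (f_{-1}, f_0, ..., f_{d-1}) to (h_0, ..., h_d); here d = dim + 1."""
    return tuple(sum((-1) ** (k - i) * comb(d - i, k - i) * f[i]
                     for i in range(k + 1)) for k in range(d + 1))

f_Qbar, f_A = (1, 10, 31, 36, 14), (1, 7, 11, 5)
print(h_vector(f_Qbar, 4))                               # (1, 6, 7, 0, 0)
print(h_vector(f_A, 3))                                  # (1, 4, 0, 0)
print(tuple(x - y for x, y in zip(f_Qbar, f_A + (0,))))  # f(Q) = (0, 3, 20, 31, 14)
```

The last line recovers $f(Q) = f(\bar{Q}) - f(A)$ componentwise, as in the display above.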
The triangles on the boundary of $Q$, i.e., those contained in exactly one facet, are illustrated in Figure 2, which shows \ No newline at end of file diff --git a/samples/texts/4392850/page_8.md b/samples/texts/4392850/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..2a8e6565c065e53a932f3e29724f31a4ab8ae787 --- /dev/null +++ b/samples/texts/4392850/page_8.md @@ -0,0 +1,27 @@ +FIGURE 2. Left: A front view of $Q$. Right: A back view of $Q$. The shaded and dashed faces are in $A$. + +the boundary of $Q$ as seen from the front (left) and back (right). The five shaded triangles are the facets of $A$, and hence are missing from $Q$. + +In what follows, we will use the fact that the triple transposition $\tau = (0\ 7)(2\ 4)(6\ 8)$ is a simplicial automorphism of $\bar{Q}$. This symmetry is apparent as a reflection through the plane containing vertices 1, 3, 5, and 9 in Figure 1, and as a vertical reflection in each part of Figure 2. + +**Theorem 3.3.** *The relative complex $Q$ is not partitionable.* + +*Proof.* Suppose that $Q$ admits a partitioning $\mathcal{P}$. We will show that a particular minimal face, namely vertex 5, must simultaneously belong to two intervals of the partitioning, which is a contradiction. + +For each facet $F \in Q$, denote by $I_F = [R_F, F]$ the interval of $\mathcal{P}$ with top element $F$. + +For each triangle $T$ on the boundary, there is only one interval that can contain $T$. In particular, $489 \in I_{1489}$. It follows that $148 \notin I_{1489}$, for otherwise $148 \cap 489 = 48 \in I_{1489}$, but $48 \notin Q$. Therefore $148 \in I_{1458}$, since $1458$ is the only other facet containing $148$. Then $458 \notin I_{1458}$, again because $148 \cap 458 = 48 \notin Q$, and thus $45 \notin I_{1458}$. The other two facets that contain $45$ are $4578$ and $1457$. Therefore, either $45 \in I_{4578}$ or $45 \in I_{1457}$. On the other hand, these are also the only two facets that contain the edge 57. 
Since + +$$45, 57 \subset 457 \subset 1457, 4578$$ + +the edges 45 and 57 must belong to the same interval of $\mathcal{P}$ (namely, whichever one of $I_{1457}, I_{4578}$ contains 457). But then that interval must also contain $45 \cap 57 = 5$. We have shown that + +either $5 \in I_{1457}$ or $5 \in I_{4578}$. \hfill (7) + +By applying the automorphism $\tau$ to the above argument, we conclude that + +either $5 \in I_{0125}$ or $5 \in I_{0256}$. \hfill (8) + +But (7) and (8) cannot both be true, and we have reached a contradiction. $\square$ + +We can now give an explicit description of our counterexample to the Partitionability Conjecture. Since $X = \bar{Q}$ and $A$ are both shellable balls, they are \ No newline at end of file diff --git a/samples/texts/4392850/page_9.md b/samples/texts/4392850/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..6c7c19f08d2258631f9c2bf1a39da22da3d5673c --- /dev/null +++ b/samples/texts/4392850/page_9.md @@ -0,0 +1,27 @@ +Cohen-Macaulay. We may therefore apply Theorem 3.1, with $N = 25$ (since A has 24 faces total). + +**Theorem 3.4.** Let $X = \bar{Q}$ be the combinatorial closure of $Q$, and let $A = X \setminus Q$. That is, $X$ and $A$ are the absolute simplicial complexes whose facets are listed in (5) and (6), respectively. Then the simplicial complex $C_{25}$ constructed in Theorem 3.1 is Cohen-Macaulay and non-partitionable. + +The $f$-vector is $f(C_{25}) = f(A) + 25f(Q) = (1, 82, 511, 780, 350)$. + +For this particular construction, the full power of Theorem 3.1 is not necessary; there is a much smaller counterexample. + +**Theorem 3.5.** Let $Q$, $A$, and $X = \bar{Q}$ be as described above. Then the simplicial complex $C_3$ obtained by gluing together three copies of $X$ along $A$ is Cohen-Macaulay and non-partitionable. + +*Proof.* Suppose that $C_3$ is partitionable. 
By the pigeonhole principle, at least one of the three copies of $Q$ inside $C_3$ has no facets matched to either edge 48 or its image under $\tau$, edge 26. These two edges are the only two faces of $A$ that occur in the argument of Theorem 3.3. Therefore, that argument applies to this copy of $Q$, and once again we conclude that (7) and (8) must both hold, a contradiction. $\square$ + +The $f$-vector is $f(C_3) = f(A) + 3f(Q) = (1, 16, 71, 98, 42)$. We do not know if there exists a smaller counterexample (for example, the complex $C_2$ obtained by gluing two copies of $X$ together along $A$ is partitionable). In particular, it is still open whether every two-dimensional Cohen-Macaulay simplicial complex is partitionable; see Hachimori [Hac08]. + +We have previously observed that $X$ and $A$ are shellable. We note that $X$ and $A$ are contractible, and it is easily seen that $X$ deformation-retracts onto $A$, so $C_3$ is contractible as well, although it is not homeomorphic to a ball. + +**Remark 3.6.** There is a much smaller *relative* simplicial complex that is Cohen-Macaulay but not partitionable, with $f$-vector $(0, 0, 5, 10, 5)$. This complex can be written as $Q' = (X', A')$, where $X' = \bar{Q}' = Z|_{\{1,4,5,7,8,9\}}$ is the complex with facets + +1589, 1489, 1458, 1457, 4578, + +and $A'$ is the (non-induced) subcomplex of $X'$ with facets + +489, 589, 578, 157. + +These complexes are shellable balls of dimensions 3 and 2 respectively (the given orders of facets are shelling orders), and $A'$ is contained in the boundary of $X'$ (note that each facet in $A'$ is contained in only one facet of $X'$), so $Q'$ is Cohen-Macaulay by [Sta87, Corollary 5.4]. On the other hand, one can check directly that there is no partitioning of $Q'$. Because $A'$ is not an induced subcomplex, it is not possible to obtain a counterexample to the Partitionability Conjecture by applying Theorem 3.1. + +**Remark 3.7.** It is easily seen that $C_3$ is constructible. 
Therefore, it furnishes a counterexample not only to the Partitionability Conjecture, but also to the conjecture that every constructible simplicial complex is partitionable [Hac00, §4]. Furthermore, since all constructible complexes are Cohen-Macaulay [BH93, p. 219], the constructibility and non-partitionability of $C_3$ are sufficient to disprove the Partitionability Conjecture. \ No newline at end of file diff --git a/samples/texts/5122728/page_93.md b/samples/texts/5122728/page_93.md new file mode 100644 index 0000000000000000000000000000000000000000..6181c97cfe550dbab7a43afb735263164ceaf0d0 --- /dev/null +++ b/samples/texts/5122728/page_93.md @@ -0,0 +1,3 @@ +The variation rates of these two parameters and their bounds are plotted in Fig. 5.10 and Fig. 5.11. As shown in these figures, the derivatives of the air flow and engine speed are in the assumed variation rate region Ω. Otherwise, the stability of switching LPV controllers obtained based on these variation rate bounds would not necessarily be guaranteed. + +Figure 5.10: Air-flow variation rate trajectory and the lower and upper bounds on its variations. \ No newline at end of file diff --git a/samples/texts/5841221/page_1.md b/samples/texts/5841221/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..189edc6f7de3894a1587412175e2399506ddaa33 --- /dev/null +++ b/samples/texts/5841221/page_1.md @@ -0,0 +1,29 @@ +# An Intuitionistic Axiomatization of ‘Eventually’ + +Martín Diéguez¹ + +Lab-STICC +CERV, Ecole Nationale d'Ingénieurs de Brest +Brest, France + +David Fernández-Duque + +Department of Mathematics +Ghent University +Ghent, Belgium + +## Abstract + +The language of linear temporal logic can be interpreted over a class of structures called *expanding posets*. This gives rise to the intuitionistic temporal logic ITLe, recently shown to be decidable by Boudou and the authors. In this article we completely axiomatize the ‘henceforth’-free fragment of this logic. 
+ +*Keywords:* axiomatization, intuitionistic modal logic, completeness, temporal logic. + +## 1 Introduction + +Intuitionistic logic is the basis for constructive reasoning and temporal logics are an important tool for reasoning about dynamic processes. One would expect that a combination of the two would yield a powerful framework in which to model phenomena involving both computation and time, an idea explored by Davies [6] and Maier [25]. This is not the only potential application of such a logic: in view of the topological interpretation of the intuitionistic implication, one may instead use it to model *space* and time [15]. This makes it important to study these logics, which in particular did not previously enjoy a complete axiomatization in the presence of ‘infinitary’ tenses. Our goal in this paper is to present such an axiomatization for ‘next’ and ‘eventually’. + +### 1.1 State-of-the-art + +There are several (poly)modal logics which may be used to model time, and some have already been studied in an intuitionistic setting, e.g. tense logics by Davoren [7] and propositional dynamic logic with iteration by Nishimura [27]. Here we are specifically concerned with intuitionistic analogues of discrete-time + +¹ Martín Diéguez is funded by the ANR-12-ASTR-0020 Project STRATEGIC. \ No newline at end of file diff --git a/samples/texts/5841221/page_10.md b/samples/texts/5841221/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..c19a8f1c7bd8fe3115d220f073dd371204b1ffc0 --- /dev/null +++ b/samples/texts/5841221/page_10.md @@ -0,0 +1,31 @@ +in other words, $\mathcal{M}_c$ is the set of prime types with the usual ordering and successor relations. Note that $\ell_c$ is just the identity (i.e., $\ell_c(\Phi) = \Phi$). We will usually omit writing $\ell_c$, as it has no effect on its argument. + +Next we show that $\mathcal{M}_c$ is a full, weak, deterministic quasimodel. 
For this, we must prove that it has all properties required by Definition 3.4.

**Lemma 4.4** *$\mathcal{M}_c$ is a labelled frame.*

**Proof.** We know that $\leq_T$ is a partial order and restrictions of partial orders are partial orders, so $\leq_c$ is a partial order. Moreover, $\ell_c$ is the identity, so $\Phi \leq_c \Psi$ implies $\ell_c(\Phi) \leq_T \ell_c(\Psi)$.

Now let $\Phi \in |\mathcal{M}_c|$ and assume that $\varphi \to \psi \in \Phi^-$. Note that $\Phi^+, \varphi \nvdash \psi$, for otherwise by intuitionistic reasoning we would have $\Phi^+ \vdash \varphi \to \psi$, which is impossible if $\Phi$ is a prime type. By Lemma 4.3, there is a prime type $\Psi$ with $\Phi^+ \cup \{\varphi\} \subseteq \Psi^+$ and $\psi \in \Psi^-$. It follows that $\Phi \leq_c \Psi$, $\varphi \in \Psi^+$ and $\psi \in \Psi^-$, as needed. $\square$

**Lemma 4.5** *$S_c$ is a forward-confluent function.*

**Proof.** For a set $\Gamma \subseteq \mathcal{L}_{\diamond}$, recall that we have defined $\bigcirc\Gamma = \{\bigcirc\varphi : \varphi \in \Gamma\}$. It will be convenient to introduce the notation

$$\Theta\Gamma = \{\varphi : \bigcirc\varphi \in \Gamma\}.$$

With this, we show that $S_c$ is functional and forward-confluent.

**FUNCTIONALITY.** We claim that for all $\Phi, \Psi \in |\mathcal{M}_c|$,

$$\Phi S_c \Psi \text{ if and only if } \Psi = (\Theta\Phi^{-}, \Theta\Phi^{+}). \quad (1)$$

We must check that $\Psi \in |\mathcal{M}_c|$. To see that $\Psi$ is full, let $\varphi \in \mathcal{L}_{\diamond}$ be such that $\varphi \notin \Psi^-$. It follows that $\bigcirc\varphi \notin \Phi^-$, but $\Phi$ is full, so $\bigcirc\varphi \in \Phi^+$ and thus $\varphi \in \Psi^+$. Since $\varphi$ was arbitrary, $\Psi^- \cup \Psi^+ = \mathcal{L}_{\diamond}$.

Next we check that $\Psi$ is consistent.
If not, let $\Gamma \subseteq \Psi^+$ and $\Delta \subseteq \Psi^-$ be finite and such that $\bigwedge \Gamma \to \bigvee \Delta$ is derivable. Using (R2) and (A5) we see that $\bigcirc\bigwedge \Gamma \to \bigcirc\bigvee \Delta$ is derivable, which in view of Lemma 2.2 implies that $\bigwedge \bigcirc\Gamma \to \bigvee \bigcirc\Delta$ is derivable as well. But $\bigcirc\Gamma \subseteq \Phi^+$ and $\bigcirc\Delta \subseteq \Phi^-$, contradicting the fact that $\Phi$ is consistent.

Thus $\Psi \in |\mathcal{M}_c|$, and $\Phi S_c \Psi$ holds provided that $\Phi S_T \Psi$. It is clear that clauses (a) and (b) of Definition 3.3 hold. If $\diamond\varphi \in \Phi^+$ and $\varphi \notin \Phi^+$, it follows that $\varphi \in \Phi^-$. By Lemma 2.2 $\diamond\varphi \to \varphi \lor \bigcirc\diamond\varphi$ is derivable, so we cannot have that $\bigcirc\diamond\varphi \in \Phi^-$ and hence $\bigcirc\diamond\varphi \in \Phi^+$, so that $\diamond\varphi \in \Psi^+$. Similarly, if $\diamond\varphi \in \Phi^-$ we have that $\bigcirc\diamond\varphi \in \Phi^-$, for otherwise we obtain a contradiction from (A6). Therefore, $\diamond\varphi \in \Psi^-$ as well.

To check that $\Psi$ is unique, suppose that $\Theta \in |\mathcal{M}_c|$ is such that $\Phi S_c \Theta$. Then if $\varphi \in \Psi^+$ it follows from (1) that $\bigcirc\varphi \in \Phi^+$ and hence $\varphi \in \Theta^+$; by the same argument, if $\varphi \in \Psi^-$ it follows that $\varphi \in \Theta^-$, and hence $\Theta = \Psi$.

**FORWARD CONFLUENCE:** Now that we have shown that $S_c$ is a function, we may treat it as such. Suppose that $\Phi \leq_c \Psi$; we must check that $S_c(\Phi) \leq_c S_c(\Psi)$.
\ No newline at end of file
diff --git a/samples/texts/5841221/page_11.md b/samples/texts/5841221/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..4773428b08817e378638a816ca6d3bcf7162c97e
--- /dev/null
+++ b/samples/texts/5841221/page_11.md
@@ -0,0 +1,32 @@
Fig. 2. If $E \subseteq |\mathcal{X}| \times |\mathcal{Y}|$ is a dynamic simulation, this diagram can always be completed.

Let $\varphi \in (S_c(\Phi))^+$. Using (1), we have that $\bigcirc\varphi \in \Phi^+$, hence $\bigcirc\varphi \in \Psi^+$ and thus $\varphi \in (S_c(\Psi))^+$. Since $\varphi \in (S_c(\Phi))^+$ was arbitrary, we obtain $S_c(\Phi) \leq_c S_c(\Psi)$, as needed. $\square$

From all these lemmas we conclude the following result.

**Proposition 4.6** *The canonical model is a deterministic weak quasimodel.*

**Proof.** In view of Definition 3.4, we need (i) $(|\mathcal{M}_c|, \ell_c)$ to be a labelled frame, (ii) $S_c$ to be a sensible forward-confluent function, and (iii) $\ell_c$ to have $T_{\mathcal{L}_{\diamond}}$ as its codomain. The first item is Lemma 4.4. That $S_c$ is a forward-confluent function is Lemma 4.5, and it is sensible since $\Phi S_c \Psi$ precisely when $\Phi S_T \Psi$. Finally, if $\Phi \in |\mathcal{M}_c|$ then $\ell_c(\Phi) = \Phi$, which is an element of $T_{\mathcal{L}_{\diamond}}$ by Lemma 4.2. $\square$

# 5 Simulations

Simulations are relations between worlds in labelled spaces, and give rise to the appropriate notion of 'substructure' for modal and intuitionistic logics. We have used them to prove that a topological intuitionistic temporal logic has the finite quasimodel property [15], and they will also be useful for our completeness proof. Below, recall that $\Phi \subseteq_T \Psi$ means that $\Phi^- \subseteq \Psi^-$ and $\Phi^+ \subseteq \Psi^+$.
**Definition 5.1** Let $\Sigma \subseteq \Delta \subseteq \mathcal{L}_{\diamond}$ be closed under subformulas, $\mathcal{X}$ be a $\Sigma$-labelled frame and $\mathcal{Y}$ be $\Delta$-labelled. A forward-confluent relation $E \subseteq |\mathcal{X}| \times |\mathcal{Y}|$ is a *simulation* if, whenever $x E y$, $\ell_{\mathcal{X}}(x) \subseteq_T \ell_{\mathcal{Y}}(y)$. If there exists a simulation $E$ such that $x E y$, we write $(\mathcal{X}, x) \models (\mathcal{Y}, y)$.

The relation $E$ is a *dynamic simulation* between $\mathcal{X}$ and $\mathcal{Y}$ if $S_{\mathcal{Y}} E \subseteq E S_{\mathcal{X}}$.

The following is proven in [15]. While the details of the construction given there are not important for our current purposes, the interested reader may find an overview in Appendix B. Below, recall that $\Sigma \in \mathcal{L}_{\diamond}$ means that $\Sigma$ is finite and closed under subformulas.

**Theorem 5.2** *Given $\Sigma \in \mathcal{L}_{\diamond}$, there exists a finite weak quasimodel $\mathcal{I}_{\Sigma}$ such that if $\mathcal{A}$ is any deterministic weak quasimodel then ${\models} \subseteq |\mathcal{I}_{\Sigma}| \times |\mathcal{A}|$ is a surjective dynamic simulation.*

Points of $\mathcal{I}_{\Sigma}$ are called *moments*. One can think of $\mathcal{I}_{\Sigma}$ as a finite initial structure over the category of labelled weak quasimodels. Next, we will inter-
\ No newline at end of file
diff --git a/samples/texts/5841221/page_12.md b/samples/texts/5841221/page_12.md
new file mode 100644
index 0000000000000000000000000000000000000000..6244b639cf64b499d48b4d000b9a8650c9d23d49
--- /dev/null
+++ b/samples/texts/5841221/page_12.md
@@ -0,0 +1,37 @@
nalize the notion of simulating elements of $\mathcal{I}_\Sigma$ into the temporal language. This is achieved by the formulas $Sim(w)$ given by the next proposition.
**Proposition 5.3** *Given $\Sigma \in \mathcal{L}_{\diamond}$ and a finite $\Sigma$-labelled frame $W$, there exist formulas $(Sim(w))_{w \in |W|}$ such that for any fully labelled frame $\mathcal{X}$, $w \in |W|$ and $x \in |\mathcal{X}|$, $Sim(w) \in \ell^{-}(x)$ if and only if there is $y \ge x$ such that $(W, w) \models (\mathcal{X}, y)$.*

**Proof.** An explicit construction is given in Appendix A. $\square$

The next proposition allows us to emulate model-theoretic reasoning within $\mathcal{L}_{\diamond}$.

**Proposition 5.4** *Fix $\Sigma \in \mathcal{L}_{\diamond}$ and let $\mathcal{I} = \mathcal{I}_{\Sigma}$, $w \in |\mathcal{I}|$ and $\psi \in \Sigma$.*

(i) If $\psi \in \ell^{-}(w)$, then $\vdash \psi \rightarrow Sim(w)$.

(ii) If $\psi \in \ell^{+}(w)$, then $\vdash (\psi \rightarrow Sim(w)) \rightarrow Sim(w)$.

(iii) If $w \preceq v$, then $\vdash Sim(v) \rightarrow Sim(w)$.

(iv) $\vdash \bigwedge_{w :\, \psi \in \ell_{\mathcal{I}}^{-}(w)} Sim(w) \rightarrow \psi$.

(v) $\vdash \bigcirc \bigwedge_{w S_{\mathcal{I}} v} Sim(v) \rightarrow Sim(w)$.

**Proof.**

(i) First assume that $\psi \in \ell^{-}(w)$, and toward a contradiction that $\not\vdash \psi \rightarrow Sim(w)$. By the Lindenbaum lemma there is $\Gamma \in |\mathcal{M}_c|$ such that $\psi \rightarrow Sim(w) \in \Gamma^-$. Thus for some $\Theta \geq_c \Gamma$ we have that $\psi \in \Theta^+$ and $Sim(w) \in \Theta^-$. But then by Proposition 5.3 we have that $(\mathcal{I}, w) \models (\mathcal{M}_c, \Delta)$ for some $\Delta \geq_c \Theta$, so that $\psi \in \Delta^-$; but $\psi \in \Theta^+$, so by upwards persistence $\psi \in \Delta^+$ as well, contradicting the consistency of $\Delta$.

(ii) If $\psi \in \ell^+(w)$, we proceed similarly. Assume toward a contradiction that $\not\vdash (\psi \rightarrow Sim(w)) \rightarrow Sim(w)$. Then, reasoning as above, there is $\Theta \in |\mathcal{M}_c|$ such that $\psi \rightarrow Sim(w) \in \Theta^+$ and $Sim(w) \in \Theta^-$.
From Proposition 5.3 we see that there is $\Delta \geq_c \Theta$ such that $(\mathcal{I}, w) \models (\mathcal{M}_c, \Delta)$, so that $\psi \in \Delta^+$ and, once again by Proposition 5.3, $Sim(w) \in \Delta^-$. It follows that $\psi \rightarrow Sim(w) \notin \Delta^+$; but in view of upward persistence, this contradicts that $\psi \rightarrow Sim(w) \in \Theta^+$.

(iii) Suppose that $v \ge w$. Reasoning as above, it suffices to show that if $\Gamma \in |\mathcal{M}_c|$ is such that $Sim(w) \in \Gamma^-$, then also $Sim(v) \in \Gamma^-$. But if $Sim(w) \in \Gamma^-$, there is $\Theta \geq_c \Gamma$ such that $(\mathcal{I}, w) \models (\mathcal{M}_c, \Theta)$. By forward confluence $(\mathcal{I}, v) \models (\mathcal{M}_c, \Delta)$ for some $\Delta \geq_c \Theta$. Thus by Proposition 5.3, $Sim(v) \in \Delta^-$ and by upwards persistence $Sim(v) \in \Gamma^-$. Since $\Gamma \in |\mathcal{M}_c|$ was arbitrary, the claim follows.

(iv) We prove that if $\Gamma \in |\mathcal{M}_c|$ is such that

$$\bigwedge_{w :\, \psi \in \ell^{-}(w)} \mathrm{Sim}(w) \in \Gamma^{+}, \quad (2)$$

then $\psi \in \Gamma^+$. If (2) holds then by Theorem 5.2, there is $w \in |\mathcal{I}|$ with $(\mathcal{I}, w) \models$
\ No newline at end of file
diff --git a/samples/texts/5841221/page_13.md b/samples/texts/5841221/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..bc50c8723e7a504906156d665777e2802e3ec4fc
--- /dev/null
+++ b/samples/texts/5841221/page_13.md
@@ -0,0 +1,42 @@
$(\mathcal{M}_c, \Gamma)$. By Proposition 5.3, $\mathrm{Sim}(w) \in \Gamma^-$, hence it follows from (2) that $\psi \notin \ell^-(w)$; but $w$ is $\Sigma$-typed and $\psi \in \Sigma$, so $\psi \in \ell^+(w)$ and thus $\psi \in \Gamma^+$, as required.

(v) Suppose that $\Gamma \in |\mathcal{M}_c|$ is such that

$$\bigcirc \bigwedge_{w S_{\mathcal{I}} v} \mathrm{Sim}(v) \in \Gamma^+, \qquad (3)$$

and assume toward a contradiction that $\mathrm{Sim}(w) \in \Gamma^-$.
By Proposition 5.3, $(\mathcal{I}, w) \models (\mathcal{M}_c, \Delta)$ for some $\Delta \geqslant_c \Gamma$. Since $\models$ is a dynamic simulation, it follows that there is $v \in |\mathcal{I}|$ with $w S_\mathcal{I} v$ and $(\mathcal{I}, v) \models (\mathcal{M}_c, S_c(\Delta))$, so that $\mathrm{Sim}(v) \in (S_c(\Delta))^{-}$. It follows that $\bigcirc\mathrm{Sim}(v) \in \Delta^-$, since $S_c$ is sensible and $\Delta$ is full. But by (3), $\bigcirc\mathrm{Sim}(v) \in \Gamma^{+}$, and since $\Delta \geqslant_c \Gamma$, upward persistence yields $\bigcirc\mathrm{Sim}(v) \in \Delta^{+}$ as well, contradicting the consistency of $\Delta$. $\square$

# 6 The initial quasimodel

We are now ready to define our initial quasimodel. Given a finite set $\Sigma$ of formulas, we will define a quasimodel $\mathcal{J}_\Sigma$ falsifying all unprovable $\Sigma$-types. This quasimodel is a substructure of $\mathcal{I}_\Sigma$, containing only moments which are possible in the following sense.

**Definition 6.1** *Fix $\Sigma \in \mathcal{L}_{\diamond}$. We say that a moment $w \in |\mathcal{I}_{\Sigma}|$ is* possible *if $\nvdash \mathrm{Sim}(w)$. We write $J_{\Sigma}$ for the set of possible moments of $\mathcal{I}_{\Sigma}$.*

With this we are ready to define our initial structure, which as we will see later is indeed a quasimodel.

**Definition 6.2** Given $\Sigma \in \mathcal{L}_{\diamond}$, we define the *initial structure* for $\Sigma$ by $\mathcal{J}_{\Sigma} = \mathcal{I}_{\Sigma} \upharpoonright J_{\Sigma}$.

Our strategy from here on will be to show that canonical structures are indeed quasimodels; once we establish this, completeness of $\mathrm{ITL}^{0}_{\diamond}$ is an easy consequence. The most involved step will be showing that the successor relation on $\mathcal{J}_{\Sigma}$ is $\omega$-sensible, but we begin with some simpler properties.

**Lemma 6.3** Let $\Sigma$ be a finite set of formulas, $\mathcal{I} = \mathcal{I}_{\Sigma}$ and $\mathcal{J} = \mathcal{J}_{\Sigma}$. Then, $|\mathcal{J}|$ is an upward-closed subset of $|\mathcal{I}|$ and $S_{\mathcal{J}}$ is serial.
**Proof.** To check that $|\mathcal{J}|$ is upward closed, let $w \in |\mathcal{J}|$ and suppose $v \ge w$. Now, by Proposition 5.4.iii, we have that

$$\vdash \mathrm{Sim}(v) \rightarrow \mathrm{Sim}(w);$$

hence if $w$ is possible, so is $v$.

To see that $S_{\mathcal{J}}$ is serial, observe that by Proposition 5.4.v, if $w \in |\mathcal{J}| \subseteq |\mathcal{I}|$,

$$\vdash \bigcirc \bigwedge_{w S_{\mathcal{I}} v} \mathrm{Sim}(v) \rightarrow \mathrm{Sim}(w).$$

Since $w$ is possible, it follows that for some $v$ with $w S_{\mathcal{I}} v$, $v$ is possible as well, for otherwise $\bigwedge_{w S_{\mathcal{I}} v} \mathrm{Sim}(v)$ would be equivalent to $\top$, allowing us to deduce $\mathrm{Sim}(w)$. But then $v \in |\mathcal{J}|$, as needed. $\square$
\ No newline at end of file
diff --git a/samples/texts/5841221/page_14.md b/samples/texts/5841221/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac51fcdc49408720b99c105b6e9bc9c9c0a1112e
--- /dev/null
+++ b/samples/texts/5841221/page_14.md
@@ -0,0 +1,41 @@
# 7 $\omega$-Sensibility

In this section we will show that $S_{\mathcal{J}}$ is $\omega$-sensible, the most difficult step in proving that $\mathcal{J} = \mathcal{J}_\Sigma$ is a quasimodel. In other words, we must show that, given $w \in |\mathcal{J}_\Sigma|$ and $\diamond\psi \in \ell^+(w)$, there is a finite path

$$w = w_0 S_{\mathcal{J}} w_1 S_{\mathcal{J}} \dots S_{\mathcal{J}} w_n,$$

where $\psi \in \ell^+(w_n)$ and $w_i \in |\mathcal{J}_\Sigma|$ for all $i \le n$.

**Definition 7.1** Let $\Sigma \in \mathcal{L}_{\diamond}$ and $w, v \in |\mathcal{J}_\Sigma|$. Say that $v$ is *reachable* from $w$ if there is a finite path

$$\vec{u} = (u_0, \dots, u_n)$$

of possible moments with $u_0 = w$, $u_n = v$, and $u_i S_{\mathcal{J}} u_{i+1}$ for all $i < n$. We denote the set of all possible moments that are reachable from $w$ by $R(w)$.
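The reachability set $R(w)$ is just the forward closure of $w$ under the successor relation (note that $w \in R(w)$ via the length-zero path), so it can be computed by a plain breadth-first search. A sketch of our own, with the successor relation encoded as an adjacency dict:

```python
from collections import deque

def reachable(w, succ):
    """R(w): all moments reachable from w along the successor relation succ."""
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

succ = {"w": ["u1"], "u1": ["u2", "w"], "u2": ["u2"]}
print(sorted(reachable("w", succ)))  # ['u1', 'u2', 'w']
```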
**Lemma 7.2** If $\Sigma \subseteq \mathcal{L}_{\diamondsuit}$ and $w \in |\mathcal{J}_\Sigma|$ then

$$\vdash \bigcirc \bigwedge_{v \in R(w)} \mathrm{Sim}(v) \rightarrow \bigwedge_{v \in R(w)} \mathrm{Sim}(v).$$

**Proof.** Let $\mathcal{I} = \mathcal{I}_\Sigma$. By Proposition 5.4.v we have that, for all $v \in R(w)$,

$$\vdash \bigcirc \bigwedge_{v S_\mathcal{I} u} \mathrm{Sim}(u) \rightarrow \mathrm{Sim}(v).$$

Now, if $u \notin |\mathcal{J}_\Sigma|$, then $\vdash \mathrm{Sim}(u)$, hence by (R2) $\vdash \bigcirc \mathrm{Sim}(u)$, and we can remove $\mathrm{Sim}(u)$ from the conjunction using Lemma 2.2 and propositional reasoning. Moreover, every possible moment $u$ with $v S_\mathcal{I} u$ belongs to $R(w)$, so the remaining conjuncts all appear in $\bigwedge_{v \in R(w)} \mathrm{Sim}(v)$. Since $v \in R(w)$ was arbitrary, this shows that

$$\vdash \bigcirc \bigwedge_{v \in R(w)} \mathrm{Sim}(v) \rightarrow \bigwedge_{v \in R(w)} \mathrm{Sim}(v).$$

□

From this we obtain the following, which evidently implies $\omega$-sensibility:

**Proposition 7.3** If $w \in |\mathcal{J}_\Sigma|$ and $\diamondsuit\psi \in \ell^+(w)$, then there is $v \in R(w)$ such that $\psi \in \ell^+(v)$.

**Proof.** Towards a contradiction, assume that $w \in |\mathcal{J}_\Sigma|$ and $\diamondsuit\psi \in \ell^+(w)$ but, for all $v \in R(w)$, $\psi \in \ell^-(v)$.

By Lemma 7.2,

$$\vdash \bigcirc \bigwedge_{v \in R(w)} \mathrm{Sim}(v) \rightarrow \bigwedge_{v \in R(w)} \mathrm{Sim}(v).$$

By the $\diamondsuit$-induction rule (R4),

$$\vdash \diamondsuit \bigwedge_{v \in R(w)} \mathrm{Sim}(v) \rightarrow \bigwedge_{v \in R(w)} \mathrm{Sim}(v);$$
\ No newline at end of file diff --git a/samples/texts/5841221/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..e14c924fab005347b04837c0e02a0682d641c13a --- /dev/null +++ b/samples/texts/5841221/page_15.md @@ -0,0 +1,43 @@
in particular,

$$ \vdash \diamond \bigwedge_{v \in R(w)} \mathrm{Sim}(v) \rightarrow \mathrm{Sim}(w). \qquad (4) $$

Now let $v \in R(w)$.
By Proposition 5.4.i and the assumption that $\psi \in \ell^{-}(v)$ we have that

$$ \vdash \psi \rightarrow \mathrm{Sim}(v), $$

and since $v$ was arbitrary,

$$ \vdash \psi \rightarrow \bigwedge_{v \in R(w)} \mathrm{Sim}(v). $$

Using distributivity (R3) we further have that

$$ \vdash \diamond \psi \rightarrow \diamond \bigwedge_{v \in R(w)} \mathrm{Sim}(v). $$

This, along with (4), shows that

$$ \vdash \diamond \psi \rightarrow \mathrm{Sim}(w); $$

however, by Proposition 5.4.ii and our assumption that $\diamond \psi \in \ell^{+}(w)$ we have that

$$ \vdash (\diamond \psi \rightarrow \mathrm{Sim}(w)) \rightarrow \mathrm{Sim}(w), $$

hence by modus ponens we obtain $\vdash \mathrm{Sim}(w)$, which contradicts the assumption that $w \in |\mathcal{J}_{\Sigma}|$. We conclude that there can be no such $w$. $\square$

**Corollary 7.4** *Given any finite set $\Sigma$ of formulas, $\mathcal{J}_{\Sigma}$ is a quasimodel.*

**Proof.** Let $\mathcal{J} = \mathcal{J}_{\Sigma}$. By Lemma 6.3, $|\mathcal{J}|$ is upwards closed in $|\mathcal{I}_{\Sigma}|$ and $S_{\mathcal{J}}$ is serial, while by Proposition 7.3, $S_{\mathcal{J}}$ is $\omega$-sensible. It follows from Lemma 3.6 that $\mathcal{J}$ is a quasimodel. $\square$

We are now ready to prove that $\mathrm{ITL}_{\diamond}^{0}$ is complete.

**Theorem 7.5** If $\varphi \in \mathcal{L}_{\diamond}$ is valid over the class of expanding posets, then $\mathrm{ITL}_{\diamond}^{0} \vdash \varphi$.

**Proof.** We prove the contrapositive. Suppose $\varphi$ is an unprovable formula and let

$$ W = \{w \in |\mathcal{I}_{\text{sub}(\varphi)}| : \varphi \in \ell^{-}(w)\}. $$

Then, by Proposition 5.4.iv we have that

$$ \vdash \bigwedge_{w \in W} \mathrm{Sim}(w) \rightarrow \varphi; $$

since $\varphi$ is unprovable, it follows that some $w^* \in W$ is possible and hence $w^* \in |\mathcal{J}_{\text{sub}(\varphi)}|$.
By Corollary 7.4, $\mathcal{J}_{\text{sub}(\varphi)}$ is a quasimodel, so that by Theorem 3.7, $\varphi$ is falsifiable in some expanding poset. $\square$
\ No newline at end of file diff --git a/samples/texts/5841221/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..7e2830256648f717c6c7ad3bfec154b6864f22d0 --- /dev/null +++ b/samples/texts/5841221/page_16.md @@ -0,0 +1,15 @@
## Concluding remarks

We have provided a sound and complete axiomatization for the □-free fragment of the expanding intuitionistic temporal logic ITLe. With this we may develop syntactic techniques to decide validity over the class of expanding posets, complementing the semantic methods presented by Boudou and the authors [4] and possibly leading to an elementary decision procedure.

Many questions remain open in this direction, perhaps most notably an extension to the full language with □. This is likely to be a much more challenging problem, as the language with ‘henceforth’ can distinguish between Kripke and topological models and hence methods based on non-deterministic quasimodels do not seem feasible. Along these lines, one can consider the LTL connectives *until* and *release*. In an upcoming contribution, we study the decidability of the full LTL language with respect to the ITLe semantics. It is likely that the techniques presented here can be extended to handle the ‘until’ operator, which can be used to define ♦; on the other hand, ‘release’ can be used to define □, and axiomatizing it should be at least as difficult as axiomatizing the logic with ‘henceforth’.

The question of axiomatizing ITLp (with persistent domains) is also of interest, but here it is possible that the logic is not even axiomatizable in principle.
It may be that methods from products of modal logics [23] can be employed here; for example, one can reduce tiling or related problems to show that certain products such as LTL × S4 are not computably enumerable. However, even if such a reduction is possible, working over the more limited intuitionistic language poses an additional challenge. Even computational lower bounds for these logics are not yet available, aside from the trivial PSPACE bound obtained from the purely propositional fragment.

Appendix

## A Simulation formulas

In this appendix, we show that there exist $\mathcal{L}_{\diamond}$ formulas defining points in finite frames up to simulability, i.e. that if $\mathcal{W}$ is a finite frame and $w \in |\mathcal{W}|$, there exists a formula $\text{Sim}(w)$ such that for all labelled frames $\mathcal{M}$ and all $x \in |\mathcal{M}|$, $\mathcal{M}, x \not\models \text{Sim}(w)$ if and only if $(\mathcal{W}, w) \models (\mathcal{M}, x)$. In contrast, such formulas do not exist in the classical modal language for finite S4 models [11], but they can be constructed using a polyadic ‘tangled’ modality. This tangled modality was proven to be expressively equivalent to the $\mu$-calculus over transitive frames by Dawar and Otto [8], and later axiomatized for several classes of models by Fernández-Duque [12] and Goldblatt and Hodkinson [18,19].

Simulation formulas were used in [13] to provide a sound and complete axiomatization of *dynamic topological logic* [1,22], a classical tri-modal system closely related to ITLe, where the intuitionistic implication is replaced by an S4 modality.
One can use the fact that simulability is not definable over the modal language to prove that the natural axiomatization of dynamic topological logic suggested by Kremer and Mints [22] was incomplete for its topological,
\ No newline at end of file diff --git a/samples/texts/5841221/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..2809e344009fc1378557e6ba78018748572d42d2 --- /dev/null +++ b/samples/texts/5841221/page_17.md @@ -0,0 +1,27 @@
let alone its Kripke, semantics [14].

While simulability is not modally definable, it is definable over the language of intuitionistic logic, as finite frames [9] (and hence models) are already definable up to simulation in the intuitionistic language. This may be surprising, as the intuitionistic language is less expressive than the modal language; however, intuitionistic models are posets rather than arbitrary preorders, and this allows us to define simulability formulas by recursion on $<$.

**Definition A.1** Fix $\Sigma \subseteq \mathcal{L}_{\diamond}$ and let $\mathcal{W}$ be a finite $\Sigma$-labelled frame. Given $w \in |\mathcal{W}|$, we define a formula $\text{Sim}(w)$ by backwards induction on ${\leq} = {\leq_{\mathcal{W}}}$ by

$$ \text{Sim}(w) = \bigwedge \ell^+(w) \rightarrow \bigvee \ell^-(w) \lor \bigvee_{v>w} \text{Sim}(v). $$

**Proposition A.2** Given $\Sigma \subseteq \Delta \subseteq \mathcal{L}_{\diamond}$, a finite $\Sigma$-labelled frame $\mathcal{W}$, a $\Delta$-labelled frame $\mathcal{X}$ and $w \in |\mathcal{W}|$, $x \in |\mathcal{X}|$:

(i) if $\text{Sim}(w) \in \ell_X^-(x)$ then there is $y \ge x$ such that $(\mathcal{W}, w) \models (\mathcal{X}, y)$, and

(ii) if there is $y \ge x$ such that $(\mathcal{W}, w) \models (\mathcal{X}, y)$ then $\text{Sim}(w) \notin \ell_X^+(x)$.

**Proof.** Each claim is proved by backward induction on $\leq$.

(i) Let us first consider the base case, when there is no $v > w$.
Assume that $\text{Sim}(w) \in \ell^-(x)$. From the definition of labelled frame, $\bigwedge \ell_W^+(w) \in \ell_X^+(y)$ and $\bigvee \ell_W^-(w) \in \ell_X^-(y)$ for some $y \ge x$. From the definition of type it follows that $\ell_W^+(w) \subseteq \ell_X^+(y)$ and $\ell_W^-(w) \subseteq \ell_X^-(y)$, so that $\ell_W(w) \subseteq_T \ell_X(y)$. It follows that $E \stackrel{\text{def}}{=} \{(w, y)\}$ is a simulation, so $(\mathcal{W}, w) \models (\mathcal{X}, y)$.

For the inductive step, let us assume that the lemma is proved for all $v > w$. Assume that $\text{Sim}(w) \in \ell_X^-(x)$. From Condition (b) it follows that $\bigwedge \ell_W^+(w) \in \ell_X^+(y)$, $\bigvee \ell_W^-(w) \in \ell_X^-(y)$ and $\bigvee_{v>w} \text{Sim}(v) \in \ell_X^-(y)$ for some $y \ge x$; in particular, $\text{Sim}(v) \in \ell_X^-(y)$ for all $v > w$. By induction hypothesis we conclude that for all $v > w$, there exists a simulation $E_v$ such that $v E_v z_v$ for some $z_v \ge y$. Let

$$ E \stackrel{\text{def}}{=} \{(w, y)\} \cup \bigcup_{v>w} E_v . $$

The reader may check that $E$ is a simulation and that $w E y \ge x$, so that $(\mathcal{W}, w) \models (\mathcal{X}, y)$, as needed.

(ii) For the base case, assume that $(\mathcal{W}, w) \models (\mathcal{X}, y)$ for some $y \ge x$, so there exists a simulation $E$ such that $w E y$. It follows that $\ell_W^+(w) \subseteq \ell_X^+(y)$ and $\ell_W^-(w) \subseteq \ell_X^-(y)$. From conditions (d) and (e) of the definition of type (Definition 3.1), it follows that $\bigwedge \ell_W^+(w) \notin \ell_X^-(y)$ and $\bigvee \ell_W^-(w) \notin \ell_X^+(y)$. But then, condition (f) gives us $\text{Sim}(w) \notin \ell_X^+(y)$, so $\text{Sim}(w) \notin \ell_X^+(x)$.

For the inductive step, by the same reasoning as in the base case it follows that $\bigwedge \ell_W^+(w) \notin \ell_X^-(y)$ and $\bigvee \ell_W^-(w) \notin \ell_X^+(y)$. Now, let $v$ be such that $v > w$. Since $E$ is forward confluent, $v E z_v$ for some $z_v \ge y$.
By induction
\ No newline at end of file diff --git a/samples/texts/5841221/page_18.md new file mode 100644 index 0000000000000000000000000000000000000000..fbc3982647742a94c25e8b687a60c412017d2fdd --- /dev/null +++ b/samples/texts/5841221/page_18.md @@ -0,0 +1,35 @@
hypothesis, $\mathrm{Sim}(v) \notin \ell^+(z_v)$, so $\mathrm{Sim}(v) \notin \ell^+(y)$. Since $v$ was arbitrary we conclude that $\bigvee_{v>w} \mathrm{Sim}(v) \notin \ell^+(y)$. Finally, from condition (f) of Definition 3.1 and the fact that $y \ge x$ we get that $\mathrm{Sim}(w) \notin \ell^+(x)$. $\square$

## B The finite initial frame

In this appendix we review the construction of the structure $\mathcal{I}_\Sigma$ of Theorem 5.2. The worlds of this structure are called *irreducible $\Sigma$-moments*. The intuition is that a $\Sigma$-moment represents all the information that holds at the same 'moment of time'. Recall that we assume throughout that $\Sigma \subseteq \mathcal{L}_\diamond$ is finite and closed under subformulas. We omit all proofs, which can be found in [15].

**Definition B.1** Let $\Sigma \subseteq \mathcal{L}_\diamond$. A $\Sigma$-moment is a $\Sigma$-labelled space $w$ such that $(|w|, \leq_w)$ is a finite tree with unique root $r_w$.

Note that moments can be arbitrarily large. In order to obtain a finite structure we will restrict the set of moments to those that are, in a sense, no bigger than they need to be. To be precise, we want them to be minimal with respect to $\trianglelefteq$, which we define below.

**Definition B.2** Let $\Sigma \subseteq \mathcal{L}_\diamond$ and $w, v$ be $\Sigma$-moments.
We write

(i) $w \sqsubseteq v$ if $|w| \subseteq |v|$, ${\leq_w} = {\leq_v} \upharpoonright |w|$, and $\ell_w = \ell_v \upharpoonright |w|$;

(ii) $w \trianglelefteq v$ if $w \sqsubseteq v$ and there is a forward confluent, surjective function $\pi: |v| \to |w|$ such that $\ell_v(u) = \ell_w(\pi(u))$ for all $u \in |v|$ and $\pi^2 = \pi$. We say that $w$ is a *reduct* of $v$ and $\pi$ is a *reduction*.

Note that the condition $\pi^2 = \pi$ is equivalent to requiring that $\pi(u) = u$ whenever $u \in |w|$. Irreducible moments are the minimal moments under $\trianglelefteq$.

**Definition B.3** Let $\Sigma \subseteq \mathcal{L}_\diamond$. A $\Sigma$-moment $w$ is irreducible if whenever $v \trianglelefteq w$, it follows that $v = w$. The set of irreducible moments is denoted $I_\Sigma$.

To view $I_\Sigma$ as a labelled frame, we need to equip it with a suitable partial order.

**Definition B.4** Let $w \in I_\Sigma$. For $u \in |w|$, let $w[u] = w \upharpoonright {\uparrow} u$, i.e.,

$$ w[u] = ({\uparrow} u ,\ {\leq_w} \upharpoonright {\uparrow} u ,\ \ell_w \upharpoonright {\uparrow} u). $$

We write $v \le w$ if $v = w[u]$ for some $u \in |w|$.

It is shown in [15] that if $w$ is irreducible and $v \le w$, then $v$ is irreducible as well. To obtain a weak quasimodel, it remains to define a sensible relation on $I_\Sigma$.

**Definition B.5** If $\Sigma \subseteq \mathcal{L}_\diamond$ and $w, v \in I_\Sigma$, we define $v \mapsto w$ if there exists a sensible, forward-confluent relation $S \subseteq |v| \times |w|$ such that $r_v S r_w$.

We are now ready to define our initial weak quasimodel.

**Definition B.6** Given $\Sigma \subseteq \mathcal{L}_\diamond$, we define $\mathcal{I} = \mathcal{I}_\Sigma$ to be the structure $(|\mathcal{I}|, \leq_\mathcal{I}, S_\mathcal{I}, \ell_\mathcal{I})$, where $|\mathcal{I}| = I_\Sigma$, $v \leq_\mathcal{I} w$ if and only if $v \ge w$, $w S_\mathcal{I} v$ if and only if $w \mapsto v$, and $\ell_\mathcal{I}(w) = \ell_w(r_w)$.
\ No newline at end of file diff --git a/samples/texts/5841221/page_19.md new file mode 100644 index 0000000000000000000000000000000000000000..38caa653d37a177f2738168a0f91d722343f89b6 --- /dev/null +++ b/samples/texts/5841221/page_19.md @@ -0,0 +1,44 @@
Note that in this construction, the moments accessible from **w** are smaller than **w**, and thus we use the reverse partial order to interpret implication. The structure $\mathcal{I}_{\Sigma}$ is always finite, a fact that is used in an essential way in our completeness proof. Below, $2_m^n$ denotes the superexponential function.

**Theorem B.7** Let $\Sigma \subseteq \mathcal{L}_{\diamond}$ and let $s = \# \Sigma$. Then, $\mathcal{I}_{\Sigma}$ is a weak $\Sigma$-quasimodel and $\#\mathcal{I}_{\Sigma} \le 2^{s^2+s}$. Moreover, if $\Sigma \subseteq \mathcal{L}_{\diamond}$ and $\mathcal{A}$ is any deterministic weak quasimodel, then there is a surjective dynamic simulation ${\Rightarrow} \subseteq |\mathcal{I}_{\Sigma}| \times |\mathcal{A}|$.

In fact, the claim proven in Fernández-Duque [15] is more general in that $\mathcal{A}$ may belong to a wider class of topological weak quasimodels, but this special case is sufficient for our purposes.

References

[1] Artëmov, S. N., J. M. Davoren and A. Nerode, *Modal logics and topological semantics for hybrid systems*, Technical Report MSI 97-05 (1997).

[2] Balbiani, P., J. Boudou, M. Diéguez and D. Fernández-Duque, *Bisimulations for intuitionistic temporal logics*, arXiv 1803.05078 (2018).

[3] Balbiani, P. and M. Diéguez, *Temporal here and there*, in: M. Loizos and A. Kakas, editors, *Logics in Artificial Intelligence* (2016), pp. 81–96.

[4] Boudou, J., M. Diéguez and D. Fernández-Duque, *A decidable intuitionistic temporal logic*, in: *26th EACSL Annual Conference on Computer Science Logic (CSL)*, 2017, pp. 14:1–14:17.

[5] Boudou, J., M. Diéguez, D. Fernández-Duque and F.
Romero, *Axiomatic systems and topological semantics for intuitionistic temporal logic*, arXiv 1803.05077 (2018).

[6] Davies, R., *A temporal-logic approach to binding-time analysis*, in: *Proceedings, 11th Annual IEEE Symposium on Logic in Computer Science, New Brunswick, New Jersey, USA*, 1996, pp. 184–195.

[7] Davoren, J. M., *On intuitionistic modal and tense logics and their classical companion logics: Topological semantics and bisimulations*, Annals of Pure and Applied Logic **161** (2009), pp. 349–367.

[8] Dawar, A. and M. Otto, *Modal characterisation theorems over special classes of frames*, Annals of Pure and Applied Logic **161** (2009), pp. 1–42; extended journal version of a LICS 2005 paper.

[9] de Jongh, D. and F. Yang, *Jankov's theorems for intermediate logics in the setting of universal models*, in: *Logic, Language, and Computation - 8th International Tbilisi Symposium on Logic, Language, and Computation*, TbiLLC 2009, Bakuriani, Georgia. Revised Selected Papers, 2009, pp. 53–76.

[10] Fernández-Duque, D., *Non-deterministic semantics for dynamic topological logic*, Annals of Pure and Applied Logic **157** (2009), pp. 110–121.

[11] Fernández-Duque, D., *On the modal definability of simulability by finite transitive models*, Studia Logica **98** (2011), pp. 347–373.

[12] Fernández-Duque, D., *Tangled modal logic for spatial reasoning*, in: T. Walsh, editor, *Proceedings of IJCAI*, 2011, pp. 857–862.

[13] Fernández-Duque, D., *A sound and complete axiomatization for dynamic topological logic*, Journal of Symbolic Logic **77** (2012), pp. 947–969.

[14] Fernández-Duque, D., *Non-finite axiomatizability of dynamic topological logic*, ACM Transactions on Computational Logic **15** (2014), pp. 4:1–4:18.

[15] Fernández-Duque, D., *The intuitionistic temporal logic of dynamical systems*, arXiv 1611.06929 (2016).
+ +[16] Fischer Servi, G., *Axiomatisations for some intuitionistic modal logics*, in: *Rendiconti del Seminario Matematico* (1984), pp. 179–194. \ No newline at end of file diff --git a/samples/texts/5841221/page_2.md b/samples/texts/5841221/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..e222a393f42d1f37096c68fccc0a9e1e836b71e5 --- /dev/null +++ b/samples/texts/5841221/page_2.md @@ -0,0 +1,23 @@ +linear temporal logic. Versions of such a logic in finite time have been studied by Kojima and Igarashi [21] and Kamide and Wansing [20]. Nevertheless, logics over infinite time have proven to be rather difficult to understand, in no small part due to their similarity to intuitionistic modal logics such as IS4, whose decidability is still an open problem [28]. + +In recent times, Balbiani, Boudou and the authors have made some advances in this direction, showing that the intermediate logic of temporal here-and-there is decidable and enjoys a natural axiomatization [3] and identifying two conservative temporal extensions of intuitionistic logic, denoted *ITL*e and *ITL*p (see §2.1). These logics are based on the temporal language with ∘ ('next'), ◊ ('eventually') and □ ('henceforth'); note that unlike in the classical case, the latter two are not inter-definable [2]. Both logics are given semantically and interpreted over the class of *dynamic posets*, structures of the form $\mathcal{F} = (W, \prec, S)$ where $\prec$ is a partial order on $W$ used to interpret implication and $S: W \to W$ is used to interpret tenses. If $w \prec v$ implies that $S(w) \prec S(v)$ we say that $\mathcal{F}$ is an *expanding poset*; *ITL*e is then defined to be the set of valid formulas for the class of expanding posets, while *ITL*p is the logic of *persistent posets*, where $\mathcal{F}$ has the additional *backward confluence* condition stating that if $v \succ S(w)$, then there is $u \succ w$ such that $S(u) = v$. 
Unlike *ITL*e, the logic *ITL*p satisfies the familiar Fischer Servi axioms [16]; nevertheless, *ITL*e has some technical advantages. We have shown that *ITL*e has the small model property while *ITL*p does not [4]; this implies that *ITL*e is decidable. It is currently unknown if *ITL*p is even axiomatizable, and in fact its modal cousin *LTL* × S4 is not computably enumerable [17]. On the other hand, while *ITL*e is axiomatizable in principle, the decision procedure we currently know uses model-theoretic techniques and does not suggest a natural axiomatization.

In [5] we laid the groundwork for an axiomatic approach to intuitionistic temporal logics, identifying a family of natural axiom systems that were sound for different classes of structures, including a ‘minimal’ logic *ITL*0 based on a standard axiomatization for *LTL*. There we consider a wider class of models based on topological semantics and show that *ITL*0 is sound for these semantics, while

(a) $\Box(p \lor q) \rightarrow \Diamond p \lor \Box q$

(b) $\Box(\Box p \rightarrow p) \land \Box(p \lor q) \rightarrow p \lor \Box q$

are Kripke-, but not topologically, valid, from which it follows that these principles are not derivable in *ITL*0.

On the other hand, it is also shown in [5] that for $\varphi \in \mathcal{L}_{\diamond}$, the following are equivalent:

(i) $\varphi$ is topologically valid,

(ii) $\varphi$ is valid over the class of expanding posets,

(iii) $\varphi$ is valid over the class of finite quasimodels.

Quasimodels are discussed in §3 and are the basis of the completeness for
\ No newline at end of file diff --git a/samples/texts/5841221/page_20.md new file mode 100644 index 0000000000000000000000000000000000000000..bd7b8bcce70cfe3ff56f06be659adeae39027579 --- /dev/null +++ b/samples/texts/5841221/page_20.md @@ -0,0 +1,23 @@
[17] Gabelaia, D., A. Kurucz, F. Wolter and M.
Zakharyaschev, *Non-primitive recursive decidability of products of modal logics with expanding domains*, Annals of Pure and Applied Logic **142** (2006), pp. 245–268. + +[18] Goldblatt, R. and I. Hodkinson, *Spatial logic of tangled closure operators and modal mu-calculus*, Annals of Pure and Applied Logic **168** (2017), pp. 1032–1090. + +[19] Goldblatt, R. and I. M. Hodkinson, *The finite model property for logics with the tangle modality*, Studia Logica **106** (2018), pp. 131–166. + +[20] Kamide, N. and H. Wansing, *Combining linear-time temporal logic with constructiveness and paraconsistency*, Journal of Applied Logic **8** (2010), pp. 33–61. + +[21] Kojima, K. and A. Igarashi, *Constructive linear-time temporal logic: Proof systems and Kripke semantics*, Information and Computation **209** (2011), pp. 1491–1503. + +[22] Kremer, P. and G. Mints, *Dynamic topological logic*, Annals of Pure and Applied Logic **131** (2005), pp. 133–158. + +[23] Kurucz, A., F. Wolter, M. Zakharyaschev and D. M. Gabbay, “Many-Dimensional Modal Logics: Theory and Applications”, Volume 148 (Studies in Logic and the Foundations of Mathematics), North Holland, 2003. + +[24] Lichtenstein, O. and A. Pnueli, *Propositional temporal logics: Decidability and completeness*, Logic Journal of the IGPL **8** (2000), pp. 55–85. + +[25] Maier, P., *Intuitionistic LTL and a new characterization of safety and liveness*, in: J. Marcinkowski and A. Tarlecki, editors, *18th EACSL Annual Conference on Computer Science Logic (CSL) (2004)*, pp. 295–309. + +[26] Mints, G., “A Short Introduction to Intuitionistic Logic,” University Series in Mathematics, Springer, 2000. + +[27] Nishimura, H., *Semantical analysis of constructive PDL*, Publications of the Research Institute for Mathematical Sciences, Kyoto University **18** (1982), pp. 427–438. + +[28] Simpson, A. K., “The Proof Theory and Semantics of Intuitionistic Modal Logic,” Ph.D. thesis, University of Edinburgh, UK (1994). 
\ No newline at end of file diff --git a/samples/texts/5841221/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..3a372d949c1a7ed371febd298808d5f2c988d04c --- /dev/null +++ b/samples/texts/5841221/page_3.md @@ -0,0 +1,15 @@
dynamic topological logic presented in [13], which works for topological, but not Kripke, semantics. This suggests that similar techniques could be employed to give a completeness proof for a natural logic over the □-free fragment, but not necessarily over the full temporal language; in fact, we do not currently have a useful notion of quasimodel in the presence of □. Moreover, (a) and (b) are not valid in most intuitionistic modal logics, and there is little reason at this point to believe that all independent validities have already been discovered. For this reason, in this manuscript we restrict our attention to the □-free fragment of the temporal language, which we denote $\mathcal{L}_{\diamond}$, and we will work with the logic ITL$_{\diamond}^0$, a □-free version of ITL$^0$.

## 1.2 Our main result

The goal of this article is to prove that $\mathrm{ITL}_{\diamond}^0$ is complete for the class of expanding posets (Theorem 7.5). The completeness proof follows the general scheme of that for linear temporal logic [24]: a set of 'local states', which we will call *moments*, is defined, where a moment is a representation of a potential point in a model (or, in our case, a quasimodel). To each moment $w$ one then assigns a characteristic formula $\chi(w)$ in such a way that $\chi(w)$ is consistent if and only if $w$ can be included in a model, from which completeness can readily be deduced.

In the LTL setting, a moment is simply a maximal consistent subset of a suitable finite set $\Sigma$ of formulas.
For us a moment is instead a finite labelled tree, and the formula $\chi(w)$ must characterize $w$ up to simulation; for this reason we will henceforth write $\mathrm{Sim}(w)$ instead of $\chi(w)$. The required formulas $\mathrm{Sim}(w)$ can readily be constructed in $\mathcal{L}_{\diamond}$ (Proposition 5.3). + +Note that it is *failure* of $\mathrm{Sim}(w)$ that characterizes the property of simulating $w$, hence the *possible* states will be those moments $w$ such that $\mathrm{Sim}(w)$ is unprovable. The set of possible moments will form a quasimodel falsifying a given unprovable formula $\varphi$ (Corollary 7.4), from which it follows that such a $\varphi$ is falsified on some model as well (Theorem 3.7). Thus any unprovable formula is falsifiable, and Theorem 7.5 follows. + +### Layout + +Section 2 introduces the syntax and semantics of ITL$^e$, and Section 3 discusses labelled structures, which generalize both models and quasimodels. Section 4 discusses the canonical model, which properly speaking is a deterministic weak quasimodel. Section 5 reviews simulations and dynamic simulations, including their definability in the intuitionistic language. Section 6 constructs the initial quasimodel and establishes its basic properties, but the fact that it is actually a quasimodel is proven only in Section 7 where it is shown that the quasimodel is $\omega$-sensible, i.e. it satisfies the required condition to interpret $\diamond$. The completeness of $\mathrm{ITL}_{\diamond}^0$ follows from this. + +Appendix A gives an explicit construction of simulation formulas and Appendix B reviews the construction of the initial weak quasimodel from [15]. 
\ No newline at end of file diff --git a/samples/texts/5841221/page_4.md b/samples/texts/5841221/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..5e1f510425c258e4193c3d2af54e644721a3bad2 --- /dev/null +++ b/samples/texts/5841221/page_4.md @@ -0,0 +1,33 @@ +## 2 Syntax and semantics + +Fix a countably infinite set $\mathbb{P}$ of 'propositional variables'. The language $\mathcal{L}$ of intuitionistic (linear) temporal logic ITL is given by the grammar + +$$ \varphi, \psi ::= p \mid \bot \mid \varphi \land \psi \mid \varphi \lor \psi \mid \varphi \rightarrow \psi \mid \circ \varphi \mid \diamond \varphi \mid \Box \varphi, $$ + +where $p \in \mathbb{P}$. As usual, we use $\neg\varphi$ as a shorthand for $\varphi \rightarrow \bot$ and $\varphi \leftrightarrow \psi$ as a shorthand for $(\varphi \rightarrow \psi) \land (\psi \rightarrow \varphi)$. We read $\circ$ as 'next', $\diamond$ as 'eventually', and $\Box$ as 'henceforth'. Given any formula $\varphi$, we denote the set of subformulas of $\varphi$ by $\text{sub}(\varphi)$. We will work mainly in the language $\mathcal{L}_{\diamond}$, defined as the sublanguage of $\mathcal{L}$ without the modality $\Box$, although the full language will be discussed occasionally. + +### 2.1 Semantics + +Formulas of $\mathcal{L}$ are interpreted over expanding posets. An *expanding poset* is a tuple $\mathcal{D} = (|\mathcal{D}|, \leq_D, S_\mathcal{D})$, where $|\mathcal{D}|$ is a non-empty set of moments, $\leq_D$ is a partial order over $|\mathcal{D}|$, and $S_\mathcal{D}$ is a function from $|\mathcal{D}|$ to $|\mathcal{D}|$ satisfying the *forward confluence condition* that for all $w, v \in |\mathcal{D}|$, if $w \leq_D v$ then $S_\mathcal{D}(w) \leq S_\mathcal{D}(v)$. We will omit the subindices in $\leq_D$, $S_\mathcal{D}$ when $\mathcal{D}$ is clear from context and write $v < w$ if $v \leq w$ and $v \neq w$. 
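The defining conditions of an expanding poset are all checkable by brute force on a finite structure. A minimal Python sketch, under a hypothetical encoding of the order as a set of pairs and of $S$ as a dictionary:

```python
from itertools import product

def is_expanding_poset(D, leq, S):
    """Verify the expanding-poset conditions of Section 2.1 on a
    finite domain D: leq (a set of pairs) is a partial order, S is a
    function D -> D, and forward confluence holds:
    w leq v implies S(w) leq S(v). Hypothetical brute-force sketch."""
    refl = all((w, w) in leq for w in D)
    antisym = all(not ((w, v) in leq and (v, w) in leq and w != v)
                  for w, v in product(D, D))
    trans = all((w, u) in leq
                for w, v in leq for v2, u in leq if v == v2)
    confluent = all((S[w], S[v]) in leq for (w, v) in leq)
    return refl and antisym and trans and confluent

# Two-point chain w <= v with S the identity: forward confluent.
D = {"w", "v"}
leq = {("w", "w"), ("v", "v"), ("w", "v")}
S = {"w": "w", "v": "v"}
print(is_expanding_poset(D, leq, S))  # True
```

Swapping in the successor function that exchanges the two points breaks forward confluence, since $w \leq v$ would then require $S(w) = v \leq w = S(v)$.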
An *intuitionistic dynamic model*, or simply *model*, is defined to be a tuple $\mathcal{M} = (|\mathcal{M}|, \leq_M, S_\mathcal{M}, V_\mathcal{M})$ consisting of an expanding poset equipped with a valuation function $V_\mathcal{M}$ from $|\mathcal{M}|$ to sets of propositional variables that is $\leq$-monotone in the sense that for all $w, v \in |\mathcal{M}|$, if $w \leq v$ then $V_\mathcal{M}(w) \subseteq V_\mathcal{M}(v)$. In the standard way, we define $S_\mathcal{M}^0(w) = w$ and $S_\mathcal{M}^{k+1}(w) = S_\mathcal{M}(S_\mathcal{M}^k(w))$. Then we define the satisfaction relation $\models$ inductively by:

(i) $\mathcal{M}, w \models p$ if $p \in V_\mathcal{M}(w)$;

(ii) $\mathcal{M}, w \not\models \bot$;

(iii) $\mathcal{M}, w \models \varphi \land \psi$ if $\mathcal{M}, w \models \varphi$ and $\mathcal{M}, w \models \psi$;

(iv) $\mathcal{M}, w \models \varphi \lor \psi$ if $\mathcal{M}, w \models \varphi$ or $\mathcal{M}, w \models \psi$;

(v) $\mathcal{M}, w \models \circ\varphi$ if $\mathcal{M}, S_\mathcal{M}(w) \models \varphi$;

(vi) $\mathcal{M}, w \models \varphi \rightarrow \psi$ if for all $v \geq w$, $\mathcal{M}, v \models \varphi$ implies $\mathcal{M}, v \models \psi$;

(vii) $\mathcal{M}, w \models \diamond\varphi$ if there exists $k$ such that $\mathcal{M}, S_\mathcal{M}^k(w) \models \varphi$;

(viii) $\mathcal{M}, w \models \Box\varphi$ if for all $k$, $\mathcal{M}, S_\mathcal{M}^k(w) \models \varphi$.

As usual, a formula $\varphi$ is valid over a class of models $\Omega$ if, for every world $w$ of every model $\mathcal{M} \in \Omega$, $\mathcal{M}, w \models \varphi$. The set of valid formulas over an arbitrary expanding poset will be called ITLe, or *expanding intuitionistic temporal logic*; the terminology was coined in [4] and is a reference to the closely-related *expanding products* of modal logics [17]. The main result of [4] is the following.

**Theorem 2.1** *ITLe is decidable.*
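On a finite model the clauses (i)–(viii) can be evaluated directly: the orbit $w, S(w), S^2(w), \dots$ must eventually cycle, so 'eventually' and 'henceforth' only need to inspect finitely many worlds. A minimal Python sketch, under a hypothetical encoding of formulas as nested tuples and of the model as described above:

```python
def sat(M, w, phi):
    """Satisfaction for the clauses of Section 2.1 on a finite model
    M = (W, leq, S, V): leq a set of pairs, S a dict (the successor
    function), V a dict mapping worlds to sets of variables.
    Hypothetical encoding; formulas are nested tuples."""
    W, leq, S, V = M
    op = phi[0]
    if op == "var":
        return phi[1] in V[w]
    if op == "bot":
        return False
    if op == "and":
        return sat(M, w, phi[1]) and sat(M, w, phi[2])
    if op == "or":
        return sat(M, w, phi[1]) or sat(M, w, phi[2])
    if op == "to":    # clause (vi): check every v >= w
        return all(not sat(M, v, phi[1]) or sat(M, v, phi[2])
                   for v in W if (w, v) in leq)
    if op == "next":  # clause (v)
        return sat(M, S[w], phi[1])
    if op == "ev":    # clause (vii): follow the S-orbit until it cycles
        seen, u = set(), w
        while u not in seen:
            if sat(M, u, phi[1]):
                return True
            seen.add(u)
            u = S[u]
        return False
    if op == "box":   # clause (viii): the whole orbit must satisfy phi
        seen, u = set(), w
        while u not in seen:
            if not sat(M, u, phi[1]):
                return False
            seen.add(u)
            u = S[u]
        return True
    raise ValueError(op)

# Toy model where p holds only after one step: w0 -> w1 -> w1.
W = {"w0", "w1"}
leq = {("w0", "w0"), ("w1", "w1")}
S = {"w0": "w1", "w1": "w1"}
V = {"w0": set(), "w1": {"p"}}
M = (W, leq, S, V)
print(sat(M, "w0", ("ev", ("var", "p"))))  # True
print(sat(M, "w0", ("var", "p")))          # False
```

This is only a prototype for building intuitions; the completeness argument of the paper works with quasimodels, not with this direct evaluator.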
Nevertheless, Theorem 2.1 is proved using purely model-theoretic techniques that do not suggest an axiomatization in an obvious way. In [5] we
\ No newline at end of file diff --git a/samples/texts/5841221/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..b5b1601ffc73e101c246363da2c9c390c5ff8898 --- /dev/null +++ b/samples/texts/5841221/page_5.md @@ -0,0 +1,41 @@
introduced the axiomatic system ITL0, inspired by standard axiomatizations for LTL. As we will see, adapting this system to $\mathcal{L}_{\diamond}$ yields a sound and complete deductive calculus for the class of expanding posets.

## 2.2 The axiomatization

Our axiomatization is obtained from propositional intuitionistic logic [26] by adding standard axioms and inference rules of LTL [24], modified to use ◇ instead of □. To be precise, the logic ITL$_{\diamond}^0$ is the least set of $\mathcal{L}_{\diamond}$-formulas closed under the following axiom schemes and rules:

(A1) All intuitionistic tautologies

(A2) $\neg \circ \bot$

(A3) $\circ\varphi \wedge \circ\psi \rightarrow \circ(\varphi \wedge \psi)$

(R1) $\frac{\varphi \quad \varphi \rightarrow \psi}{\psi}$

(R2) $\frac{\varphi}{\circ\varphi}$

(A4) $\circ(\varphi \vee \psi) \rightarrow \circ\varphi \vee \circ\psi$

(A5) $\circ(\varphi \rightarrow \psi) \rightarrow (\circ\varphi \rightarrow \circ\psi)$

(A6) $\varphi \vee \circ\diamond\varphi \rightarrow \diamond\varphi$

(R3) $\frac{\varphi \rightarrow \psi}{\diamond\varphi \rightarrow \diamond\psi}$

(R4) $\frac{\circ\varphi \rightarrow \varphi}{\diamond\varphi \rightarrow \varphi}$

The axioms (A2)-(A5) are standard for a functional modality. Axiom (A6) is the dual of $\Box\varphi \rightarrow \varphi \wedge \circ\Box\varphi$.
The rule (R3) replaces the dual K-axiom $\Box(\varphi \rightarrow \psi) \rightarrow (\diamond\varphi \rightarrow \diamond\psi)$, while (R4) is dual to the induction rule $\frac{\varphi \rightarrow \mathcal{O}\varphi}{\varphi \rightarrow \Box\varphi}$. As we show next, we can also derive the converses of some of these axioms. Below, for a set of formulas $\Gamma$ we define $\mathcal{O}\Gamma = \{\mathcal{O}\varphi : \varphi \in \Gamma\}$, and empty conjunctions and disjunctions are defined by $\bigwedge\emptyset = \top$ and $\bigvee\emptyset = \bot$.

**Lemma 2.2** Let $\varphi \in \mathcal{L}_{\diamond}$ and $\Gamma \subseteq \mathcal{L}_{\diamond}$ be finite. Then, the following are derivable in ITL$_{\diamond}^{0}$:

(i) $\mathcal{O}\bigwedge\Gamma \leftrightarrow \bigwedge\mathcal{O}\Gamma$

(ii) $\mathcal{O}\bigvee\Gamma \leftrightarrow \bigvee\mathcal{O}\Gamma$

(iii) $\diamond\varphi \rightarrow \varphi \vee \mathcal{O}\diamond\varphi$.

*Proof.* For the first two claims, one direction is obtained by repeated use of axioms (A3) or (A4), and the other is proven using (R2) and (A5); note that the second claim requires (A2) to treat the case where $\Gamma = \emptyset$. Details are left to the reader.

For the third claim, reasoning within ITL$_{\diamond}^{0}$, note that $\varphi \to \diamond\varphi$ holds by (A6) and propositional reasoning, hence $\mathcal{O}\varphi \to \mathcal{O}\diamond\varphi$ by (R2), (A5) and (R1). Similarly, $\mathcal{O}\diamond\varphi \to \diamond\varphi$ holds by (A6) and propositional reasoning, so $\mathcal{O}\mathcal{O}\diamond\varphi \to \mathcal{O}\diamond\varphi$ follows by (R2), (A5) and (R1). Hence, $\mathcal{O}\varphi \vee \mathcal{O}\mathcal{O}\diamond\varphi \to \mathcal{O}\diamond\varphi$ holds. Using (A4) and some propositional reasoning we obtain $\mathcal{O}(\varphi \vee \mathcal{O}\diamond\varphi) \to \varphi \vee \mathcal{O}\diamond\varphi$.
But then, by (R4), $\diamond(\varphi \vee \mathcal{O}\diamond\varphi) \to \varphi \vee \mathcal{O}\diamond\varphi$; since $\diamond\varphi \to \diamond(\varphi \vee \mathcal{O}\diamond\varphi)$ can be proven using (R3), we obtain $\diamond\varphi \to \varphi \vee \mathcal{O}\diamond\varphi$, as needed. $\square$

For purposes of this discussion, a *logic* may be any set $\Lambda \subseteq \mathcal{L}_{\diamond}$, and we may write $\Lambda \vdash \varphi$ instead of $\varphi \in \Lambda$. Then, $\Lambda$ is *sound* for a class of structures $\Omega$ if, \ No newline at end of file diff --git a/samples/texts/5841221/page_6.md b/samples/texts/5841221/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..f049cb2a79af5e45ad1f4287b8ada798c40c17e9 --- /dev/null +++ b/samples/texts/5841221/page_6.md @@ -0,0 +1,54 @@ +whenever $\Lambda \vdash \varphi$, it follows that $\Omega \models \varphi$. The following is essentially proven in [5]:

**Theorem 2.3** *ITL$_{\diamond}^{0}$ is sound for the class of expanding posets.*

Note however that a few of the axioms and rules have been modified to fall within $\mathcal{L}_{\diamond}$, but these modifications are innocuous and their correctness may be readily checked by the reader. We remark that, in contrast to Lemma 2.2, $(\mathcal{O}p \rightarrow \mathcal{O}q) \rightarrow \mathcal{O}(p \rightarrow q)$ is not valid [2], hence by Theorem 2.3, it is not derivable.

# 3 Labelled structures

The central ingredient of our completeness proof is given by non-deterministic quasimodels, introduced by Fernández-Duque in the context of dynamic topological logic [10] and later adapted to intuitionistic temporal logic [15].

## 3.1 Two-sided types

Quasimodels are structures whose worlds are labelled by types, as defined below. More specifically, following [5], our quasimodels will be based on two-sided types.
+ +**Definition 3.1** Let $\Sigma \subseteq \mathcal{L}_{\diamond}$ be closed under subformulas and $\Phi^{-}, \Phi^{+} \subseteq \Sigma$. +We say that the pair $\Phi = (\Phi^{-}; \Phi^{+})$ is a *two-sided $\Sigma$-type* if: + +(a) $\Phi^{-} \cap \Phi^{+} = \emptyset$, + +(b) $\Phi^{-} \cup \Phi^{+} = \Sigma$, + +(c) $\perp \notin \Phi^{+}$, + +(d) if $\varphi \wedge \psi \in \Sigma$, then $\varphi \wedge \psi \in \Phi^{+}$ if and only if $\varphi, \psi \in \Phi^{+}$, + +(e) if $\varphi \vee \psi \in \Sigma$, then $\varphi \vee \psi \in \Phi^{+}$ if and only if $\varphi \in \Phi^{+}$ or $\psi \in \Phi^{+}$, + +(f) if $\varphi \to \psi \in \Phi^{+}$, then either $\varphi \in \Phi^{-}$ or $\psi \in \Phi^{+}$, and + +(g) if $\diamond\varphi \in \Phi^{-}$ then $\varphi \in \Phi^{-}$. + +The set of two-sided Σ-types will be denoted TΣ. + +We will write $\Phi \preceq_T \Psi$ if $\Phi^+ \subseteq \Psi^+$ (or, equivalently, if $\Psi^- \subseteq \Phi^-$). If $\Sigma \subseteq \Delta$ +are both closed under subformulas, $\Phi \in T_\Sigma$ and $\Psi \in T_\Delta$, we will write $\Phi \subseteq_T \Psi$ +if $\Phi^- \subseteq \Psi^-$ and $\Phi^+ \subseteq \Psi^+$. + +Often (but not always) we will want $\Sigma$ to be finite, in which case given +$\Delta \subseteq \mathcal{L}_{\diamond}$ we write $\Sigma \supseteq \Delta$ if $\Sigma$ is finite and closed under subformulas. It is not +hard to check that $\prec_T$ is a partial order on $T_{\Sigma}$. Whenever $\Xi$ is an expression +denoting a two-sided type, we write $\Xi^-$ and $\Xi^+$ to denote its components. +Elements of $T_{\mathcal{L}_{\diamond}}$ are full types. Note that Fernández-Duque [15] uses one-sided +types, but it is readily checked that a one-sided $\Sigma$-type $\Phi$ as defined there +can be regarded as a two-sided type $\Psi$ by setting $\Psi^+ = \Phi$ and $\Psi^- = \Sigma \setminus \Phi$. +Henceforth we will refer to two-sided types simply as types. 
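For a finite $\Sigma$, conditions (a)-(g) can be checked by brute force over all partitions of $\Sigma$. As a sketch (the tuple encoding of formulas is an illustrative choice, not from the paper), the following enumerates the two-sided $\Sigma$-types for $\Sigma$ the set of subformulas of $\diamond p$:

```python
# Brute-force enumeration of the two-sided Sigma-types of Definition 3.1 for a
# tiny Sigma.  The tuple encoding of formulas is an illustrative choice.
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_type(sigma, plus):
    plus = set(plus)
    minus = sigma - plus                 # (a) and (b) hold by construction
    for phi in sigma:
        op = phi[0]
        if op == 'bot' and phi in plus:
            return False                 # (c)
        if op == 'and' and ((phi in plus) != (phi[1] in plus and phi[2] in plus)):
            return False                 # (d)
        if op == 'or' and ((phi in plus) != (phi[1] in plus or phi[2] in plus)):
            return False                 # (e)
        if op == 'imp' and phi in plus and not (phi[1] in minus or phi[2] in plus):
            return False                 # (f)
        if op == 'dia' and phi in minus and phi[1] not in minus:
            return False                 # (g)
    return True

p = ('atom', 'p')
dia_p = ('dia', p)
sigma = {p, dia_p}                       # closed under subformulas
types = [set(P) for P in subsets(sigma) if is_type(sigma, P)]
print(len(types))
```

Of the four candidate partitions, exactly one is rejected: $\Phi^+ = \{p\}$ fails condition (g), since $\diamond p \in \Phi^-$ forces $p \in \Phi^-$.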
\ No newline at end of file diff --git a/samples/texts/5841221/page_7.md b/samples/texts/5841221/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..02f8f5842dfadb07ab455fae3e6430e226bbd745 --- /dev/null +++ b/samples/texts/5841221/page_7.md @@ -0,0 +1,41 @@ +## 3.2 Quasimodels + +Next we will define quasimodels; these are similar to models, except that valuations are replaced with a labelling function $\ell$. We first define the more basic notion of $\Sigma$-labelled frame. + +**Definition 3.2** Let $\Sigma \subseteq \mathcal{L}_{\diamond}$ be closed under subformulas. A $\Sigma$-labelled frame is a triple $\mathcal{F} = (|\mathcal{F}|, \ll_{\mathcal{F}}, \ell_{\mathcal{F}})$, where $\ll_{\mathcal{F}}$ is a partial order on $|\mathcal{F}|$ and $\ell_{\mathcal{F}}: |\mathcal{F}| \to T_{\Sigma}$ is such that + +(a) whenever $w \preceq_{\mathcal{F}} v$ it follows that $\ell_{\mathcal{F}}(w) \preceq_T \ell_{\mathcal{F}}(v)$, and + +(b) whenever $\varphi \to \psi \in \ell_{\mathcal{F}}^{-}(w)$, there is $v \succcurlyeq_{\mathcal{F}} w$ such that $\varphi \in \ell_{\mathcal{F}}^{+}(v)$ and $\psi \in \ell_{\mathcal{F}}^{-}(v)$. + +We say that $\mathcal{F}$ falsifies $\varphi \in \mathcal{L}_{\diamond}$ if $\varphi \in \ell^{-}(w)$ for some $w \in W$. + +As before, we may omit the subindexes in $\ll_{\mathcal{F}}$, $S_{\mathcal{F}}$ and $\ell_{\mathcal{F}}$ when $\mathcal{F}$ is clear from context. Labelled frames model only the intuitionistic aspect of the logic. For the temporal dimension, let us define a new relation over types. + +**Definition 3.3** Let $\Sigma \subseteq \mathcal{L}_{\diamond}$ be closed under subformulas. 
We define a relation $S_T \subseteq T_{\Sigma} \times T_{\Sigma}$ by $\Phi S_T \Psi$ iff for all $\varphi \in \mathcal{L}_{\diamond}$:

(a) if $\bigcirc \varphi \in \Phi^{+}$ then $\varphi \in \Psi^{+}$,

(b) if $\bigcirc \varphi \in \Phi^{-}$ then $\varphi \in \Psi^{-}$,

(c) if $\diamond \varphi \in \Phi^{+}$ and $\varphi \in \Phi^{-}$ then $\diamond \varphi \in \Psi^{+}$, and

(d) if $\diamond \varphi \in \Phi^{-}$, then $\diamond \varphi \in \Psi^{-}$.

Quasimodels are then defined as labelled frames with a suitable binary relation.

**Definition 3.4** Given $\Sigma \subseteq \mathcal{L}_{\diamond}$ closed under subformulas, a $\Sigma$-quasimodel is a tuple $Q = (|Q|, \ll_Q, S_Q, \ell_Q)$ where $(|Q|, \ll_Q, \ell_Q)$ is a labelled frame and $S_Q$ is a binary relation over $|Q|$ that is

(i) *serial*: for all $w \in |Q|$ there is $v \in |Q|$ such that $w S_Q v$;

(ii) *forward-confluent*: if $w \preceq_Q w'$ and $w S_Q v$, there is $v'$ such that $v \preceq_Q v'$ and $w' S_Q v'$;

(iii) *sensible*: if $w S_Q v$ then $\ell_Q(w) S_T \ell_Q(v)$, and

(iv) *$\omega$-sensible*: whenever $\diamond \varphi \in \ell_Q^+(w)$, there are $n \ge 0$ and $v$ such that $w S_Q^n v$ and $\varphi \in \ell_Q^+(v)$.

A forward-confluent, sensible $\Sigma$-labelled frame is a *weak $\Sigma$-quasimodel*, and if $S_Q$ is a function we say that $Q$ is *deterministic*.

We may write *quasimodel* instead of $\Sigma$-quasimodel when $\Sigma$ is clear from context, and *full quasimodel* instead of $\mathcal{L}_{\diamond}$-quasimodel. Similar conventions apply to labelled structures, weak quasimodels, etc.

**Definition 3.5** Let $Q$ be a weak quasimodel and let $U \subseteq |Q|$.
\ No newline at end of file diff --git a/samples/texts/5841221/page_8.md b/samples/texts/5841221/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..a12a9937e5f3d054cc2cf8f22a7d1b93b33f051e --- /dev/null +++ b/samples/texts/5841221/page_8.md @@ -0,0 +1,33 @@ +Fig. 1. If $S$ is forward-confluent, then the above diagram can always be completed.

The restriction of $\mathcal{Q}$ with respect to $U$ is defined to be the structure

$$ \mathcal{Q} \upharpoonright U = (|\mathcal{Q} \upharpoonright U|, \ll_{\mathcal{Q}\upharpoonright U}, S_{\mathcal{Q}\upharpoonright U}, \ell_{\mathcal{Q}\upharpoonright U}), $$

where:

(i) $|\mathcal{Q} \upharpoonright U| = U$;

(ii) $\ll_{\mathcal{Q}\upharpoonright U} = \ll_\mathcal{Q} \cap (U \times U)$;

(iii) $S_{\mathcal{Q}\upharpoonright U} = S_\mathcal{Q} \cap (U \times U)$;

(iv) $\ell_{\mathcal{Q}\upharpoonright U} = \ell_\mathcal{Q} \cap (U \times T_\Sigma)$.

**Lemma 3.6** If $\mathcal{Q}$ is a weak quasimodel, $U \subseteq |\mathcal{Q}|$ is upward closed and $S_\mathcal{Q} \upharpoonright U$ is serial and $\omega$-sensible, then $\mathcal{Q} \upharpoonright U$ is a quasimodel.

**Proof.** We must show that $\mathcal{Q} \upharpoonright U$ satisfies all properties of Definition 3.4. First we check that

$$ (U, \ll_{\mathcal{Q}\upharpoonright U}, \ell_{\mathcal{Q}\upharpoonright U}) $$

is a labelled frame. The relation $\ll_{\mathcal{Q}\upharpoonright U}$ is a partial order, since restrictions of partial orders are partial orders. Similarly, if $x \ll_{\mathcal{Q}\upharpoonright U} y$ it follows that $x \ll_\mathcal{Q} y$, so from the definition of $\ell_{\mathcal{Q}\upharpoonright U}$ it is easy to deduce that $\ell_{\mathcal{Q}\upharpoonright U}(x) \preceq_T \ell_{\mathcal{Q}\upharpoonright U}(y)$.

To check that condition (b) holds, let us take $x \in U$ and a formula $\varphi \to \psi \in \ell^-_{\mathcal{Q}\upharpoonright U}(x)$.
By definition, $\varphi \to \psi \in \ell^-(x)$, so there exists $y \in |\mathcal{Q}|$ such that $x \ll_\mathcal{Q} y$, $\varphi \in \ell^+(y)$ and $\psi \in \ell^-(y)$. Since $U$ is upward closed, $y \in U$ and, by definition, $x \ll_{\mathcal{Q}\upharpoonright U} y$, $\varphi \in \ell^+_{\mathcal{Q}\upharpoonright U}(y)$ and $\psi \in \ell^-_{\mathcal{Q}\upharpoonright U}(y)$, as needed.

Now we check that the relation $S_{\mathcal{Q}\upharpoonright U}$ satisfies (i)-(iv). Note that $S_{\mathcal{Q}\upharpoonright U}$ is serial and $\omega$-sensible by assumption, and it is clearly sensible as $S_\mathcal{Q}$ was already sensible, so it remains to see that $S_{\mathcal{Q}\upharpoonright U}$ is forward-confluent. Take $x, y, z \in U$ such that $x \ll_{\mathcal{Q}\upharpoonright U} y$ and $x S_{\mathcal{Q}\upharpoonright U} z$. By definition $x \ll_\mathcal{Q} y$ and $x S_\mathcal{Q} z$. Since $S_\mathcal{Q}$ is forward-confluent, there exists $t \in |\mathcal{Q}|$ such that $z \ll_\mathcal{Q} t$ and $y S_\mathcal{Q} t$. Since $U$ is upward closed, $t \in U$ and, by definition, $y S_{\mathcal{Q}\upharpoonright U} t$ and $z \ll_{\mathcal{Q}\upharpoonright U} t$. $\square$

The following result of [5] will be crucial for our completeness proof.

**Theorem 3.7** A formula $\varphi \in \mathcal{L}_\diamond$ is falsifiable over the class of expanding posets if and only if it is falsifiable over the class of finite sub($\varphi$)-quasimodels.

As usual, if $\varphi$ is not derivable, we wish to produce an expanding poset where $\varphi$ is falsified, but in view of Theorem 3.7, it suffices to falsify $\varphi$ on a quasimodel. This is convenient, as quasimodels are much easier to construct than models.
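The restriction construction of Definition 3.5 and the forward-confluence step of Lemma 3.6 can be exercised on toy data; in the following sketch the worlds and relations are invented purely for illustration:

```python
# Toy check of Definition 3.5 / Lemma 3.6: restricting to an upward-closed U
# preserves forward-confluence.  Worlds and relations below are invented data.

def restrict(leq, S, U):
    U = set(U)
    return (U,
            {(w, v) for (w, v) in leq if w in U and v in U},
            {(w, v) for (w, v) in S if w in U and v in U})

def forward_confluent(worlds, leq, S):
    # whenever w <= w' and w S v, some v' satisfies v <= v' and w' S v'
    return all(any((v, v2) in leq and (w2, v2) in S for v2 in worlds)
               for (w, w2) in leq for (wa, v) in S if wa == w)

worlds = {0, 1, 2, 3}
leq = {(w, w) for w in worlds} | {(0, 1), (2, 3)}   # 0 <= 1 and 2 <= 3
S = {(0, 2), (1, 3), (2, 2), (3, 3)}                # successor relation
U = {1, 3}                                          # upward closed

W2, leq2, S2 = restrict(leq, S, U)
print(forward_confluent(worlds, leq, S), forward_confluent(W2, leq2, S2))
```

As in the proof of Lemma 3.6, the witness $t$ for the restricted structure is found inside $U$ precisely because $U$ is upward closed.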
\ No newline at end of file diff --git a/samples/texts/5841221/page_9.md b/samples/texts/5841221/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..91164a19eaa70efceee9f28a718e8f7130b65b2b --- /dev/null +++ b/samples/texts/5841221/page_9.md @@ -0,0 +1,27 @@ +# 4 The canonical model

The standard canonical model for $\mathbb{ITL}_{\diamondsuit}^{0}$ is only a full, weak, deterministic quasimodel rather than a proper model. Nevertheless, it will be a useful ingredient in our completeness proof. Since we are working over an intuitionistic logic, the role of maximal consistent sets will be played by prime types, which we define below; recall that *full types* are elements of $T_{\mathcal{L}_{\diamondsuit}}$.

**Definition 4.1** Given two sets of formulas $\Gamma$ and $\Delta$, we say that $\Delta$ is a consequence of $\Gamma$ (denoted by $\Gamma \vdash \Delta$) if there exist finite $\Gamma' \subseteq \Gamma$ and $\Delta' \subseteq \Delta$ such that $\mathbb{ITL}_{\diamondsuit}^{0} \vdash \bigwedge \Gamma' \rightarrow \bigvee \Delta'$.

We say that a pair of sets $\Phi = (\Phi^{-}, \Phi^{+})$ is *full* if $\Phi^{-} \cup \Phi^{+} = \mathcal{L}_{\diamondsuit}$, and *consistent* if $\Phi^{+} \not\vdash \Phi^{-}$. A full, consistent pair is a *prime type*. The set of prime types will be denoted $T_{\infty}$.

Note that we are using the standard interpretation of $\Gamma \vdash \Delta$ in Gentzen-style calculi. When working within a turnstile, we will follow the usual proof-theoretic conventions of writing $\Gamma, \Delta$ instead of $\Gamma \cup \Delta$ and $\varphi$ instead of $\{\varphi\}$. Observe that there is no clash in terminology regarding the use of the word *type*:

**Lemma 4.2** If $\Phi$ is a prime type then $\Phi$ is an $\mathcal{L}_{\diamondsuit}$-type.

**Proof.** Let $\Phi$ be a prime type; we must check that $\Phi$ satisfies all conditions of Definition 3.1.
Condition (b) holds by assumption, and conditions (a) and (c) follow from the consistency of $\Phi$.

The proofs of the other conditions are all similar to each other. For example, for (f), suppose that $\varphi \to \psi \in \Phi^{+}$ and $\varphi \notin \Phi^{-}$. Since $\Phi$ is full, it follows that $\varphi \in \Phi^{+}$. But $(\varphi \land (\varphi \to \psi)) \to \psi$ is an intuitionistic tautology, so using the fact that $\Phi$ is consistent we see that $\psi \notin \Phi^{-}$, which once again using condition (b) gives us $\psi \in \Phi^{+}$. For condition (g) we use (A6): if $\diamondsuit\varphi \in \Phi^{-}$ and $\varphi \in \Phi^{+}$, then $\Phi$ would be inconsistent, hence $\varphi \in \Phi^{-}$. The rest of the conditions are left to the reader. $\square$

As with maximal consistent sets, prime types satisfy a Lindenbaum property.

**Lemma 4.3 (Lindenbaum Lemma)** Let $\Gamma$ and $\Delta$ be sets of formulas. If $\Gamma \not\vdash \Delta$ then there exists a prime type $\Phi$ such that $\Gamma \subseteq \Phi^{+}$ and $\Delta \subseteq \Phi^{-}$.

**Proof.** The proof is standard, but we provide a sketch. Let $\varphi \in \mathcal{L}_{\diamondsuit}$. Note that either $\Gamma, \varphi \not\vdash \Delta$ or $\Gamma \not\vdash \Delta, \varphi$, for otherwise by a cut rule (which is intuitionistically derivable) we would have $\Gamma \vdash \Delta$. Thus we can add $\varphi$ either to $\Gamma$ or to $\Delta$, and by repeating this process for each element of $\mathcal{L}_{\diamondsuit}$ (or using Zorn's lemma) we can find a suitable $\Phi$. $\square$

Given a set $A$, let $\mathbb{I}_A$ denote the identity function on $A$.
The canonical model $\mathcal{M}_c$ is then defined as the labelled structure

$$
\mathcal{M}_c = (|\mathcal{M}_c|, \ll_c, S_c, \ell_c) \stackrel{\text{def}}{=} (T_{\mathcal{L}_\diamond}, \ll_T, S_T, \mathbb{I}_{T_\infty}) \upharpoonright T_\infty;
$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_1.md b/samples/texts/6084721/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..ba026a60226bec7e9cfa9a8b772e2cecc79da474 --- /dev/null +++ b/samples/texts/6084721/page_1.md @@ -0,0 +1,18 @@ +Uniform Convergence of Semimartingales
+and Minimum Distance Estimators

C. Sibeux. Laboratoire Statistique et Processus, Département de Mathématiques, Université d'Angers, 2 Boulevard Lavoisier, 49045 Angers cedex, FRANCE

L. Vostrikova. Laboratoire Statistique et Processus, Département de Mathématiques, Université d'Angers, 2 Boulevard Lavoisier, 49045 Angers cedex, FRANCE

Abstract

We consider a sequence of special semimartingales $X^{n,\theta} = (X_t^{n,\theta})_{t\ge 0}$ depending on the parameter $\theta \in \Theta \subseteq \mathbb{R}^m$ and give conditions, expressed in terms of the predictable characteristics of $X^{n,\theta}$, for the uniform in $\theta$ weak convergence of the semimartingales $X^{n,\theta}$. Using these results about uniform convergence we study the minimum distance estimators of $\theta$ for semimartingales.

**Key words :** uniform weak convergence, semimartingale, predictable characteristic, minimum distance estimator, consistency, asymptotic normality.

AMS classification : 60F17, 62F12.
+

# 1 Introduction

Uniform (with respect to the parameter) weak convergence of stochastic processes plays an important role in statistics, for instance in the study of the weak convergence of maximum likelihood estimators, Bayesian estimators and minimum distance estimators, and in obtaining minimax theorems for estimators \ No newline at end of file diff --git a/samples/texts/6084721/page_10.md b/samples/texts/6084721/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..414c76b73965596fd941663b94b01ce899f8080e --- /dev/null +++ b/samples/texts/6084721/page_10.md @@ -0,0 +1,51 @@
$$
\langle M^{n,\theta} \rangle_t = \langle X^{n,\theta,c} \rangle_t + \int_0^t \int_{\mathbb{R}\setminus\{0\}} x^2 d\nu^{n,\theta} - \sum_{0 < s \le t} \left( \int_{\mathbb{R}\setminus\{0\}} x \, \nu^{n,\theta}(\{s\}, dx) \right)^2.
$$

Writing $\langle \tilde{M}^{n,\theta} \rangle_t$ for the analogous expression with the jumps truncated at level $a$, we can estimate

$$
\begin{align*}
|\langle M^{n,\theta} \rangle_t - \langle \tilde{M}^{n,\theta} \rangle_t| &\le \Big|\int_0^t \int_{|x|>a} x^2 d\nu^{n,\theta} - \sum_{0 < s \le t} \int_{|x|>a} x\nu^{n,\theta}(\{s\},dx) \\
&\quad \times \Big[2\int_{|x|\le a} x\nu^{n,\theta}(\{s\},dx) + \int_{|x|>a} x\nu^{n,\theta}(\{s\},dx)\Big]\Big| \\
&\le 4 \int_0^t \int_{|x|>a} x^2 d\nu^{n,\theta},
\end{align*}
$$

and by condition 2) the right-hand side of this inequality tends to zero in $P^n$-probability. This means that condition 3) is satisfied when we replace $\langle M^{n,\theta} \rangle_t$ by $\langle \tilde{M}^{n,\theta} \rangle_t$, i.e. the condition $U_4^i$ holds.

Finally, all the conditions of theorem 1 of Liptser, Shiryayev [21], p. 608-609, are satisfied and we have

$$
X^{n, \theta} \xrightarrow{w(P^n)} X^{\theta}
$$

for every $\theta \in \mathbb{R}^m$ in the Skorohod space $D(\mathbb{R}^+, \mathbb{R})$.

To prove the convergence of the finite-dimensional distributions we show that

$$
(X^{n,\theta_1}, X^{n,\theta_2}, \dots, X^{n,\theta_l}) \xrightarrow{w(P^n)} (X^{\theta_1}, X^{\theta_2}, \dots, X^{\theta_l})
$$

in the Skorohod space $D(\mathbb{R}^+, \mathbb{R}^l)$. In turn, according to the Cramér-Wold principle (see [2], p.
), this convergence is equivalent to

$$
\sum_{i=1}^{l} c_i X^{n, \theta_i} \xrightarrow{w(P^n)} \sum_{i=1}^{l} c_i X^{\theta_i} \qquad (8)
$$

for every $c_i \in \mathbb{R}$ and $\theta_i \in \mathbb{R}^m$. Without loss of generality we can and will suppose that $c_i \neq 0$ for all $1 \le i \le l$.

We notice that the process $\sum_{i=1}^l c_i X^{n, \theta_i}$ with arbitrary $c_i \in \mathbb{R}$ and $\theta_i \in \mathbb{R}^m$ is a special locally square-integrable semimartingale with the decomposition

$$
\sum_{i=1}^{l} c_i X_{t}^{n, \theta_i} = \sum_{i=1}^{l} c_i X_{0}^{n, \theta_i} + \sum_{i=1}^{l} c_i A_{t}^{n, \theta_i} + \sum_{i=1}^{l} c_i M_{t}^{n, \theta_i},
$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_11.md b/samples/texts/6084721/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..9d383fdc853374fe7d371c1c0d260144c148c058 --- /dev/null +++ b/samples/texts/6084721/page_11.md @@ -0,0 +1,25 @@ +and $\sum_{i=1}^{l} c_i X^{\theta_i}$ is continuous with the following decomposition

$$ \sum_{i=1}^{l} c_i X_t^{\theta_i} = \sum_{i=1}^{l} c_i X_0^{\theta_i} + \sum_{i=1}^{l} c_i A_t^{\theta_i} + \sum_{i=1}^{l} c_i M_t^{\theta_i}. $$

To prove the weak convergence (8) we can verify the conditions of group 1 for $\sum_{i=1}^{l} c_i X^{n, \theta_i}$ and $\sum_{i=1}^{l} c_i X^{\theta_i}$ and show that the triplet of the semimartingale $\sum_{i=1}^{l} c_i X^{\theta_i}$ satisfies the conditions of the type (2), (3), (4) and (5).

Condition 1) is clearly verified since

$$ \sum_{i=1}^{l} c_i X_0^{n, \theta_i} \xrightarrow{P^n} \sum_{i=1}^{l} c_i X_0^{\theta_i}. $$

To verify condition 2) of group 1 we denote by $\nu^{n, \theta_1, \theta_2, \dots, \theta_l}$ the compensator of the jump measure of $(X^{n, \theta_1}, \dots, X^{n, \theta_l})$. Since

$$ |c_1 x_1 + c_2 x_2 + \dots + c_l x_l| \le |x| |c|, $$

where $x = (x_1, x_2, \dots, x_l)$.
Here $c = (c_1, c_2, \dots, c_l)$ and $|\cdot|$ is the Euclidean norm. We have

$$ \begin{aligned} & \int_0^t \int_{|c_1 x_1 + c_2 x_2 + \dots + c_l x_l| > a} |c_1 x_1 + c_2 x_2 + \dots + c_l x_l|^2 d\nu^{n, \theta_1, \theta_2, \dots, \theta_l} \\ & \le |c|^2 \int_0^t \int_{|x| > a/|c|} |x|^2 d\nu^{n, \theta_1, \theta_2, \dots, \theta_l} \le |c|^2 \sum_{i=1}^l \int_0^t \int_{|x| > a/|c|} x_i^2 d\nu^{n, \theta_1, \theta_2, \dots, \theta_l} \end{aligned} \quad (9) $$

Noting that for every $a > 0$

$$ \{x = (x_1, x_2, \dots, x_l) : |x| > a\} \subset \bigcup_{i=1}^{l} \{x = (x_1, x_2, \dots, x_l) : |x_i| > \frac{a}{l}\}, $$

we obtain

$$ \begin{aligned} & \int_0^t \int_{|x|>a} x_i^2 d\nu^{n,\theta_1,\theta_2,\dots,\theta_l} \\ & \leq \sum_{j=1}^l \int_0^t \int_{|x_j|>a/l} x_j^2 d\nu^{n,\theta_1,\theta_2,\dots,\theta_l} \\ & \leq 2t \sum_{j=1}^l \int_0^t \int_{|x_j|>a/l} x_j^2 d\nu^{n,\theta_j}. \end{aligned} $$ \ No newline at end of file diff --git a/samples/texts/6084721/page_12.md b/samples/texts/6084721/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..909a60b82a71e07ac129b208cbd2785d892e5be5 --- /dev/null +++ b/samples/texts/6084721/page_12.md @@ -0,0 +1,44 @@ +Since each term on the right-hand side of (9) tends to zero in $P^n$-probability, condition 2) of group 1 for the semimartingale $\sum_{i=1}^l c_i X^{n, \theta_i}$ is verified. To obtain condition 3) of group 1 for $\sum_{i=1}^l c_i X^{n, \theta_i}$ we notice that

$$
\begin{align*}
& P^n(\sup_{t \le L} |\sum_{i=1}^l c_i A_t^{n, \theta_i} - \sum_{i=1}^l c_i \int_0^t a^{\theta_i}(s, X^{n, \theta_i}) du_s| \ge \epsilon) \\
& \le \sum_{i=1}^l P^n(\sup_{t \le L} |A_t^{n, \theta_i} - \int_0^t a^{\theta_i}(s, X^{n, \theta_i}) du_s| \ge \frac{\epsilon}{l\,|c_i|})
\end{align*}
$$

and we use condition 3) of group 1 for each semimartingale $X^{n, \theta_i}$.
To verify +the condition 4) for $\sum_{i=1}^{l} c_i X^{n, \theta_i}$ we write + +$$ +\begin{align*} +\langle \sum_{i=1}^{l} c_i M^{n, \theta_i} \rangle_l &= \sum_{i=1}^{l} \sum_{j=1}^{l} c_i c_j \langle M^{n, \theta_i}, M^{n, \theta_j} \rangle_l, \\ +\langle \sum_{i=1}^{l} c_i M^{\theta_i} \rangle_l &= \sum_{i=1}^{l} \sum_{j=1}^{l} c_i c_j \langle M^{\theta_i}, M^{\theta_j} \rangle_l. +\end{align*} +$$ + +so that condition 1) for $\sum_{i=1}^l c_i X^{n,\theta_i}$ follows from the corresponding conditions +1) and 5) of groups 1 and 2 respectively. + +Now we verify the conditions of the type (2), (3), (4), (5), and (6) for the +semimartingales $\sum_{i=1}^l c_i X^{n,\theta_i}$. These are satisfied because + +$$ +\sum_{i=1}^{l} c_i A_t^{\theta_i}(X^i) = \int_0^l (\sum_{i=1}^{l} c_i a^{\theta_i}(s, X^i)) du_s, +$$ + +$$ +\langle \sum_{i=1}^{l} c_i M^{\theta_i} \rangle_l = \int_0^l \left( \sum_{i=1}^{l} c_i^2 e^{\theta_i}(s, X^i) \right) + \sum_{i \neq j} c_i c_j d^{\theta_i, \theta_j}(s, X^i, X^j) du_s. +$$ + +In addition we have + +$$ +|\sum_{i=1}^{l} c_i A_t^{\theta_i}(X^i)| \leq \sum_{i=1}^{l} |c_i| L(t) (1 + \sup_{s 0$ such that for every bounded $\ell^p$-stopping time +$\tau$ and $t \ge 0$ + +$$ +\begin{align} +& L^n | X_{\tau \wedge t}^{n, \beta} - X_{\tau \wedge t}^{n, \beta'} |^p \le c(p) \{ L^n | X_0^{n, \beta} - X_0^{n, \beta'} |^p \nonumber \\ +& \phantom{L^n | X_{\tau \wedge t}^{n, \beta} - X_{\tau \wedge t}^{n, \beta'} |^p \le } + L^n | A_{\tau \wedge t}^{n, \beta} - A_{\tau \wedge t}^{n, \beta'} |^p + L^n (< X^{n, \beta,c} - X^{n, \beta',c} >_{\tau \wedge t}^{p/2}) \nonumber \\ +& \phantom{L^n | X_{\tau \wedge t}^{n, \beta} - X_{\tau \wedge t}^{n, \beta'} |^p \le } + L^n [\int_0^{\tau \wedge t} \int_{\mathbb{R}^2 \setminus \{0,0\}} (x-y)^2 d\nu^{n,\beta,\beta'} ]^{p/2} + L^n \int_0^{\tau \wedge t} \int_{\mathbb{R}^2 \setminus \{0,0\}} |x-y|^p d\nu^{n,\beta,\beta'}]. 
\tag{10}
\end{align}
$$

From the semimartingale decomposition

$$
X_{\tau \wedge t}^{n, \beta} = X_0^{n, \beta} + A_{\tau \wedge t}^{n, \beta} + M_{\tau \wedge t}^{n, \beta}
$$

and the inequality $(a+b)^p \le 2^p (a^p + b^p)$, valid for all $a, b \ge 0$ and $p > 0$, we conclude that

$$
\begin{equation}
\begin{split}
& L^n | X_{\tau \wedge t}^{n, \beta} - X_{\tau \wedge t}^{n, \beta'} |^p \le c(p) \{ L^n | X_0^{n, \beta} - X_0^{n, \beta'} |^p \\
& \qquad + L^n | A_{\tau \wedge t}^{n, \beta} - A_{\tau \wedge t}^{n, \beta'} |^p + L^n | M_{\tau \wedge t}^{n, \beta} - M_{\tau \wedge t}^{n, \beta'} |^p \}.
\end{split}
\tag{11}
\end{equation}
$$

Since $(M_{\tau \wedge t}^{n, \beta} - M_{\tau \wedge t}^{n, \beta'})_{t \ge 0}$ is a locally square-integrable martingale, the estimation of Valkeila, Dzaparidze [4] implies that for $p \ge 2$

$$
\begin{equation}
\begin{aligned}
& L^n | M_{\tau \wedge t}^{n, \beta} - M_{\tau \wedge t}^{n, \beta'} |^p \le c(p) \{ L^n \langle M^{n, \beta} - M^{n, \beta'} \rangle_{\tau \wedge t}^{p/2} \\
& \qquad + L^n \int_0^{\tau \wedge t} \int_{\mathbb{R}^2 \setminus \{0,0\}} |x-y|^p d\nu^{n, \beta, \beta'} \},
\end{aligned}
\tag{12}
\end{equation}
$$

where $c(p)$ is a constant depending on $p$.

Noting that

$$
\begin{align*}
\langle M^{n,\beta} - M^{n,\beta'} \rangle_{\tau\wedge t} &= \langle X^{n,\beta,c} - X^{n,\beta',c} \rangle_{\tau\wedge t} + \int_0^{\tau\wedge t} \int_{\mathbb{R}^2\setminus\{0,0\}} (x-y)^2 d\nu^{n,\beta,\beta'} \\
&\quad - \sum_{0 < s \le \tau\wedge t} \left( \int_{\mathbb{R}^2\setminus\{0,0\}} (x-y) \, \nu^{n,\beta,\beta'}(\{s\}, d(x,y)) \right)^2 \\
&\le \langle X^{n,\beta,c} - X^{n,\beta',c} \rangle_{\tau\wedge t} + 2\int_0^{\tau\wedge t} \int_{\mathbb{R}^2\setminus\{0,0\}} (x-y)^2 d\nu^{n,\beta,\beta'},
\tag{13}
\end{align*}
$$

the relations (11), (12) and (13) imply the inequality (10) for some constant $c(p)$ depending on $p$.
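The elementary inequality $(a+b)^p \le 2^p(a^p+b^p)$ invoked in the step leading to (11) can be sanity-checked numerically; a throwaway sketch, purely for illustration:

```python
# Numerical sanity check (illustration only) of the elementary inequality
# (a + b)^p <= 2^p (a^p + b^p), valid for all a, b >= 0 and p > 0.
import random

random.seed(0)
for _ in range(10000):
    a, b = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    p = random.uniform(0.1, 6.0)
    # small multiplicative slack guards against floating-point rounding
    assert (a + b) ** p <= 2.0 ** p * (a ** p + b ** p) * (1.0 + 1e-12)
print("no counterexample found")
```

The inequality follows from $(a+b)^p \le (2\max(a,b))^p = 2^p \max(a,b)^p \le 2^p(a^p+b^p)$.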
\ No newline at end of file diff --git a/samples/texts/6084721/page_14.md b/samples/texts/6084721/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..b205098f0d6a8220290158e295b55bad73690375 --- /dev/null +++ b/samples/texts/6084721/page_14.md @@ -0,0 +1,37 @@ +To verify condition 2) of theorem 2 we have to prove that

$$
\lim_{h \to 0} \sup_n P^n \left( \sup_{0 \le t \le T} \sup_{\substack{|\theta-\theta'| \le h \\ \theta, \theta' \in K_i}} |X_t^{n,\theta} - X_t^{n,\theta'}| \ge \epsilon \right) = 0 \quad (14)
$$

for all $\epsilon > 0$ and $T > 0$. For this we introduce the stopping time

$$
\tau = \inf\{0 \le t \le T : \sup_{\substack{|\theta-\theta'| \le h \\ \theta,\theta' \in K_i}} |X_t^{n,\theta} - X_t^{n,\theta'}| \ge \epsilon\}
$$

with the convention that $\inf(\emptyset) = T$. Then

$$
P^n \left( \sup_{0 \le t \le T} \sup_{\substack{|\theta-\theta'| \le h \\ \theta,\theta' \in K_i}} |X_t^{n,\theta} - X_t^{n,\theta'}| \ge \epsilon \right) \le P^n \left( \sup_{\substack{|\theta-\theta'| \le h \\ \theta,\theta' \in K_i}} |X_\tau^{n,\theta} - X_\tau^{n,\theta'}| \ge \epsilon \right). \quad (15)
$$

To estimate the right-hand side of the inequality (15) we use Lemma 7.4, p. 1000, of Pfaff [19]. We note that the conditions of group 3 and the inequality (10) give

$$
L^n |X_{\tau \wedge t}^{n, \theta} - X_{\tau \wedge t}^{n, \theta'}|^p \le c(p) |\theta - \theta'|^\alpha
$$

for every bounded stopping time $\tau$ and $t \ge 0$. From this lemma it follows that

$$
P^n \left( \sup_{\substack{|\theta-\theta'| \le h \\ \theta,\theta' \in K_i}} |X_{\tau\wedge t}^{n,\theta} - X_{\tau\wedge t}^{n,\theta'}| \geq \epsilon \right) \leq c(p, \epsilon) h^{\alpha-m} \quad (16)
$$

where $c(p, \epsilon)$ is some constant independent of $n$. Choosing $t$ big enough to ensure that $\tau \wedge t = \tau$, then taking $\sup_n$ and letting $h \to 0$, we obtain condition 2) of theorem 2.

It remains to show how one can construct the modifications of the processes $X^n$ and $X$ with values in $D(\mathbb{R}^+; C_{loc})$.
Repeating the same arguments as above, but with rational $\theta$ and $\theta'$, we obtain the estimation (16) and, hence, (14). Then, taking rational $\theta \in Q(\mathbb{R}^m)$, we set for $\theta_0 \in \mathbb{R}^m \setminus Q(\mathbb{R}^m)$ and $t \ge 0$

$$
X_t^{n,\theta_0} = \lim_{\theta \to \theta_0} X_t^{n,\theta}.
$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_15.md b/samples/texts/6084721/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..47dc32ef6f5a724579b2f0792b8c00937646f479 --- /dev/null +++ b/samples/texts/6084721/page_15.md @@ -0,0 +1,27 @@ +Finally, we verify that for each $\epsilon > 0$, $\theta_0 \in \mathbb{R}^m$ and $N \in \mathbb{N}$

$$ \lim_{h \to 0} P^n(W_h^N(X^{n,\theta_0}) \ge \epsilon) = 0. \quad (17) $$

In fact, this is true for $\theta_0 \in Q(\mathbb{R}^m)$, and for irrational $\theta_0$ we choose $\theta \in Q(\mathbb{R}^m)$ such that $|\theta_0 - \theta| \le h$ and write

$$ W_h^N(X^{n,\theta_0}) \le W_h^N(X^{n,\theta}) + 2V_h^{N,i}(X^n). $$

This inequality together with (14) and (17) gives the result. $\square$

# 4 Consistency and weak convergence of the minimum distance estimators

We suppose that on the filtered probability space $(\Omega, \mathcal{F}, \mathbb{F}^\epsilon, P^\epsilon)$ we are given a sequence of special square-integrable semimartingales $X^{\epsilon, \theta} = (X_t^{\epsilon, \theta})_{t \ge 0}$ depending on the parameter $\theta \in \Theta \subset \mathbb{R}^m$ and having the decomposition (1). We suppose that $\Theta$ is a bounded, closed, convex set.

We assume that $X^\theta = (X_t^\theta)_{t \ge 0}$ is a deterministic function with paths in $D(\mathbb{R}^+; \mathbb{R})$ of locally bounded variation. This function will play the role of the "limit" trajectory, as $\epsilon \to 0$, of the process $X^{\epsilon, \theta}$.
We suppose that the space of paths $D([0, T]; \mathbb{R})$ with $T > 0$ is endowed with a norm $||\cdot||_T$, bounded above by the corresponding supremum norm, and that for every function $X \in D(\mathbb{R}^+, \mathbb{R})$, $||X||_T$ is a non-decreasing right-continuous function of $T$.

We suppose further that the "limit" paths are distinguishable in the norm $||\cdot||_T$, i.e. for all $T > 0$ and $\theta, \theta' \in \Theta$ with $\theta \neq \theta'$ we have

$$ ||X^{\theta} - X^{\theta'}||_{T} > 0, $$

and that they are continuous in this norm, i.e. $||X^{\theta} - X^{\theta'}||_T$ tends to zero as $\theta' \to \theta$ for every $\theta \in \Theta$.

We assume that the true value of the parameter is $\theta_0$, $\theta_0 \in \text{int}(\Theta)$, and we define the minimum distance estimator $\hat{\theta}_T^\epsilon$ corresponding to observations on the interval $[0, T]$ as

$$ \hat{\theta}_T^\epsilon = \arg \inf_{\theta \in \Theta} ||X^{\epsilon, \theta_0} - X^\theta||_T. $$ \ No newline at end of file diff --git a/samples/texts/6084721/page_16.md b/samples/texts/6084721/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..c4337f77166e4cb0468fff12da301e0ef08a2837 --- /dev/null +++ b/samples/texts/6084721/page_16.md @@ -0,0 +1,35 @@ +In the following theorem we give conditions under which the estimator $\hat{\theta}_T^\epsilon$ is consistent.

**Theorem 4** We assume that for $T > 0$, as $\epsilon \to 0$,

1) $||A^{\epsilon,\theta_0} - X^{\theta_0}||_T \xrightarrow{P^\epsilon} 0$,

2) $\langle M^{\epsilon,\theta_0} \rangle_T \xrightarrow{P^\epsilon} 0$.

Then $\hat{\theta}_T^\epsilon$ is a consistent estimator of $\theta_0$.

**Proof.** We consider the random elements $Y_T^{\epsilon,\theta} = ||X^{\epsilon,\theta_0} - X^\theta||_T$ and $Y_T^{\theta} = ||X^{\theta_0} - X^\theta||_T$.
It is easy to see that for each $\theta, \theta'$ we have

$$|Y_T^{\epsilon,\theta} - Y_T^{\epsilon,\theta'}| \le ||X^{\theta} - X^{\theta'}||_T, \quad |Y_T^{\theta} - Y_T^{\theta'}| \le ||X^{\theta} - X^{\theta'}||_T$$

and, since in the norm $||\cdot||_T$ the limit paths are continuous in $\theta$, $Y_T^{\epsilon,\theta}$ and $Y_T^{\theta}$ are in $C(\Theta)$.

We will now show that the laws of $Y_T^\epsilon = (Y_T^{\epsilon,\theta})_{\theta \in \Theta}$ converge in $C(\Theta)$ to the law of $Y_T = (Y_T^{\theta})_{\theta \in \Theta}$. For this, using the semimartingale decomposition, we can write

$$\sup_{\theta \in \Theta} |Y_T^{\epsilon,\theta} - Y_T^{\theta}| \le ||X^{\epsilon,\theta_0} - X^{\theta_0}||_T \le ||A^{\epsilon,\theta_0} - X^{\theta_0}||_T + ||M^{\epsilon,\theta_0}||_T \le ||A^{\epsilon,\theta_0} - X^{\theta_0}||_T + \sup_{0 \le t \le T} |M_t^{\epsilon,\theta_0}|. \quad (18)$$

From the Lenglart-Rebolledo inequality ([21], p. 46) we have for each $a > 0, b > 0$

$$P^\epsilon(\sup_{0 \le t \le T} |M_t^{\epsilon, \theta_0}| > a) \le b/a^2 + P^\epsilon(\langle M^{\epsilon,\theta_0} \rangle_T > b). \quad (19)$$

The inequality (19) together with (18) and the conditions 1) and 2) of the theorem implies that

$$\sup_{\theta \in \Theta} |Y_T^{\epsilon,\theta} - Y_T^{\theta}| \xrightarrow{P^\epsilon} 0,$$

which establishes the needed convergence.

Now consider the functional $\phi$ such that

$$\phi(Y_T^\epsilon) = \arg \inf_{\theta \in \Theta} Y_T^{\epsilon, \theta}.$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_17.md b/samples/texts/6084721/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..d7cc4d8b0501adeac6e868fc9174fca55f254883 --- /dev/null +++ b/samples/texts/6084721/page_17.md @@ -0,0 +1,27 @@ +From the condition of distinguishability we see that $\phi(Y_T) = \arg\inf_{\theta \in \Theta} ||X^{\theta_0} - X^{\theta}||_T = \theta_0$ is unique.
Hence, $\phi$ is continuous and the estimator

$$ \hat{\theta}_T^{\epsilon} = \arg \inf_{\theta \in \Theta} ||X^{\epsilon, \theta_0} - X^{\theta}||_T $$

converges to $\theta_0$ in law and, hence, also in probability. $\square$

To study the weak convergence of the normalized minimum distance estimators

$$ \hat{\beta}_T^{\epsilon} = \frac{1}{\epsilon}(\hat{\theta}_T^{\epsilon} - \theta_0) $$

we introduce a new parameter $\beta = (\theta - \theta_0)/\epsilon$ and set $\tilde{X}^{\epsilon, \theta_0} = (\tilde{X}_t^{\epsilon, \theta_0})_{t \ge 0}$ with

$$ \tilde{X}_t^{\epsilon, \theta_0} = (X_t^{\epsilon, \theta_0} - X_t^{\theta_0})/\epsilon. $$

We also introduce a continuous Gaussian process $l^{(\theta_0)} = (l_t^{(\theta_0)})_{t \ge 0}$ playing the role of the limit process for $\tilde{X}^{\epsilon, \theta_0}$, and having the decomposition

$$ l_t^{(\theta_0)} = B_t^{(\theta_0)} + N_t^{(\theta_0)} $$

where $B^{(\theta_0)}$ is a predictable process of locally integrable variation and $N^{(\theta_0)}$ is a local martingale.

We suppose that the solution of the corresponding semimartingale problem is unique and that there exist predictable functions $b^{(\theta_0)}(t, X), c^{(\theta_0)}(t, X)$ with modulus integrable with respect to some continuous increasing deterministic function $(u_s)$ of locally bounded variation such that

$$ B_t^{(\theta_0)}(X) = \int_0^t b^{(\theta_0)}(s, X) du_s, \\ \langle N^{(\theta_0)} \rangle_t(X) = \int_0^t c^{(\theta_0)}(s, X) du_s.
$$

We assume that the functions $b^{(\theta_0)}(t, X)$ and $c^{(\theta_0)}(t, X)$ are continuous in the Skorohod metric and satisfy a linear growth condition of the form

$$ |b^{(\theta_0)}(t, X)| + |c^{(\theta_0)}(t, X)| \le L(t)\Big(1 + \sup_{s \le t} |X_s|\Big) $$

with some locally bounded function $L$.

**Theorem 5** We assume that for each $\delta > 0$ and $T > 0$

$$1) (X_0^{\epsilon, \theta_0} - X_0^{\theta_0}) / \epsilon \xrightarrow{P^\epsilon} 0,$$

$$2) \overline{\lim}_{\epsilon \to 0} P^{\epsilon} \left( \int_{0}^{T} \int_{|x|>r} |x|^2 d\tilde{V}^{\epsilon, \theta_0} \geq \delta \right) = 0, \quad \forall r > 0,$$

$$3) \overline{\lim}_{\epsilon \to 0} P^{\epsilon} \left( \sup_{t \le T} \left| \frac{A_t^{\epsilon, \theta_0} - X_t^{\theta_0}}{\epsilon} - \int_0^t b(s, \tilde{X}^{\epsilon, \theta_0}) du_s \right| \ge \delta \right) = 0,$$

$$4) \overline{\lim}_{\epsilon \to 0} P^{\epsilon} \left( \sup_{t \le T} \left| \frac{\langle M^{\epsilon,\theta_0} \rangle_t}{\epsilon^2} - \int_0^t c(s, \tilde{X}^{\epsilon, \theta_0}) du_s \right| \ge \delta \right) = 0.$$

5) there exists a function $\dot{X}^{\theta_0} \in D(\mathbb{R}^+; \mathbb{R}^m)$ such that as $|\Delta| \to 0$

$$\frac{||X^{\theta_0+\Delta} - X^{\theta_0} - {}^\top\Delta\, \dot{X}^{\theta_0}||_T}{|\Delta|} \to 0.$$

6) the strong identifiability condition that

$$\lim_{L \to +\infty} \lim_{\epsilon \to 0} \inf_{|\beta|>L} ||\frac{1}{\epsilon}(X^{\theta_0+\epsilon\beta} - X^{\theta_0})||_T = +\infty,$$

7) $\arg \inf_{\beta \in \mathbb{R}^m} ||l^{(\theta_0)} - {}^\top\beta\, \dot{X}^{\theta_0}||_T$ is unique.
+

Then there is a weak convergence of the normalized estimators $\hat{\beta}_T^{\epsilon}$ to the
estimator $\hat{\beta}_T$ defined by

$$\hat{\beta}_T = \arg \inf_{\beta \in \mathbb{R}^m} ||l^{(\theta_0)} - {}^\top\beta\, \dot{X}^{\theta_0}||_T.$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_19.md b/samples/texts/6084721/page_19.md new file mode 100644 index 0000000000000000000000000000000000000000..f41caaa53263a7351902ee845acba3fbf4a2a974 --- /dev/null +++ b/samples/texts/6084721/page_19.md @@ -0,0 +1,38 @@ +**Remark 1.** To clarify the sense of the conditions of this theorem, we introduce the processes $Y^{\epsilon,\beta} = (Y_t^{\epsilon,\beta})_{t\ge0}$ with

$$
Y_t^{\epsilon,\beta} = \left\| \frac{1}{\epsilon} (X^{\epsilon,\theta_0} - X^{\theta_0+\epsilon\beta}) \right\|_{t} \quad (20)
$$

and the process $Y^\beta = (Y_t^\beta)_{t\ge0}$ with

$$
Y_t^\beta = \| l^{(\theta_0)} - {}^\top\beta\, \dot{X}^{\theta_0} \|_t. \qquad (21)
$$

One can see that the conditions 1) - 5) assure a weak convergence of the processes $Y^\epsilon = (Y^{\epsilon,\beta})_{\beta \in \mathbb{R}^m}$ to the process $Y = (Y^\beta)_{\beta \in \mathbb{R}^m}$ in the space $D(\mathbb{R}^+, C_{\text{loc}})$. Condition 1) assures the convergence of the initial values, condition 2) is a sort of Lindeberg condition, and conditions 3) and 4) are conditions of differentiability of $A^{\epsilon,\theta_0}$ and $\langle M^{\epsilon,\theta_0} \rangle$ in the sense of the probability distance furnished with the sup-in-$t$ norm. In turn, conditions 6) and 7) assure the continuity of the functional arg inf.

We continue with a discussion of the strong identifiability condition.

**Lemma 1** We suppose that the limit trajectory $X^\theta$ is differentiable in $\theta_0$ with respect to the norm $||\cdot||_T$.
Then the following two conditions are equivalent:

1) the strong identifiability condition that

$$
\lim_{L \to +\infty} \lim_{\epsilon \to 0} \inf_{|\beta|>L} ||\frac{1}{\epsilon}(X^{\theta_0+\epsilon\beta} - X^{\theta_0})||_T = +\infty.
$$

2) the identifiability condition that for every $\nu > 0$, $\inf_{|\theta-\theta_0|>\nu} ||X^{\theta}-X^{\theta_0}||_T > 0$,
and the non-singularity condition that there exists $c > 0$ such that for all
$b \in \mathbb{R}^m$, $||(\dot{X}^{\theta_0}, b)||_T \ge c|b|$.

**Proof.** Suppose that condition 1) is satisfied and the non-singularity condition is not. Then we can find a sequence $(b_n)$ with $|b_n| = 1$ such that $||(\dot{X}^{\theta_0}, b_n)||_T \to 0$ as $n \to \infty$. Let $L > 0$. Then there exists $n_0$ such that for all $n \ge n_0$ we have $||(\dot{X}^{\theta_0}, b_n)||_T \le 1/L$. Condition 1) applied to $\beta = 2Lb_n$ gives

$$
\lim_{L \to +\infty} 2L \lim_{\epsilon \to 0} \frac{||X^{\theta_0 + 2Lb_n\epsilon} - X^{\theta_0}||_T}{2L\epsilon} = +\infty
$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_2.md b/samples/texts/6084721/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..c96087c1eac8ab7e913275cbb63935d197dae487 --- /dev/null +++ b/samples/texts/6084721/page_2.md @@ -0,0 +1,15 @@ +etc.
We mention the paper of Jacod, Mano [9] where the uniform convergence of the processes was related to the uniform convergence of the Prohorov distances between them, the book of Shiryayev, Spokoinij [24] where the notion of the $\lambda$-convergence, useful for the minimax theorems for estimators, was introduced, as well as the papers of Vostrikova [26], [27] where a weak convergence of the likelihood ratio processes was studied in the Skorohod space $D(\mathbb{R}^+, C_{\text{loc}})$ with values in the space of continuous functions endowed with the locally uniform metric.

We can often suppose that we observe a sequence of special semimartingales $X^{n,\theta} = (X_t^{n,\theta})_{t \ge 0}$ depending on the parameter $\theta \in \Theta \subseteq \mathbb{R}^m$. For instance, if we consider the empirical process for the first $[nt]$ observations of a sequence of independent random variables $(X_k)_{k \ge 1}$ having the same distribution function $F_\theta$, then the process $X^{n,\theta}$ will be given by

$$X_t^{n,\theta}(u) = \frac{1}{n} \sum_{k=1}^{[nt]} I_{\{X_k \le u\}}$$

where $t \in [0, 1]$, $u \in \mathbb{R}$ and $I_{\{\cdot\}}$ is an indicator function. The process $X^{n,\theta}(u) = (X_t^{n,\theta}(u))_{0 \le t \le 1}$ is a special semimartingale on the naturally defined filtered probability space, and its canonical decomposition is given by:

$$X_t^{n,\theta}(u) = \frac{1}{n} \sum_{k=1}^{[nt]} (I_{\{X_k \le u\}} - F_\theta(u)) + \frac{[nt]}{n} F_\theta(u).$$

If we consider a sequence of diffusion-type processes with small noise, then under some natural conditions they will also be special semimartingales.

One of the aims of this paper is to give conditions for a weak convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ of a sequence of special semimartingales $X^{n,\theta}$ to a continuous Gaussian semimartingale $X^\theta = (X_t^\theta)_{t \ge 0}$. Here we mention the paper of C.
Sibeux [25], where such conditions were obtained in the case of a diffusion-type limit process $X^\theta$.

But what is the relation between the convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ and the convergence of estimators of the type arg sup or arg inf, like maximum likelihood estimators or minimum distance estimators? Using the Skorohod representation theorem one can see that the convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ implies the uniform on compact sets convergence of the processes which, under some supplementary conditions (as the ones of lemma 3 of [1]) assuring in fact the continuity of arg sup or arg inf, gives the convergence of the estimators. \ No newline at end of file diff --git a/samples/texts/6084721/page_20.md b/samples/texts/6084721/page_20.md new file mode 100644 index 0000000000000000000000000000000000000000..09551c8af33b976f94cbc9a003d185d3585c1667 --- /dev/null +++ b/samples/texts/6084721/page_20.md @@ -0,0 +1,33 @@ +but

$$2L \lim_{\epsilon \to 0} \frac{\| X^{\theta_0 + 2Lb_n\epsilon} - X^{\theta_0} \|_T}{2\epsilon L} = 2L\, \| (\dot{X}^{\theta_0}, b_n) \|_T \le 2$$

for sufficiently big $n$, contradicting condition 1).

If we suppose that the identifiability condition fails, i.e. that there exists $\nu_0$ such that $\inf_{|\theta-\theta_0|>\nu_0} ||X^\theta - X^{\theta_0}||_T = 0$, then for every $L > 0$ there exists $\epsilon_0$ such that for all $0 < \epsilon < \epsilon_0$ we have $L\epsilon \le \nu_0$ and

$$\inf_{|\beta|>L} ||X^{\theta_0+\epsilon\beta} - X^{\theta_0}||_T \le \inf_{|\theta-\theta_0|>\nu_0} ||X^\theta - X^{\theta_0}||_T = 0,$$

which also contradicts condition 1).

We suppose now that the conditions of 2) are satisfied and we fix $L > 0$. By the differentiability condition we can find $\nu_0 > 0$ such that for all $|\theta-\theta_0| \le \nu_0$ we have

$$||X^{\theta} - X^{\theta_0} - {}^\top (\theta - \theta_0)\, \dot{X}^{\theta_0}||_T \le \frac{c}{2}|\theta - \theta_0|,$$

where $c > 0$ is taken from the non-singularity condition.
By the identifiability condition we have $\inf_{|\theta-\theta_0|>\nu_0} ||X^\theta - X^{\theta_0}||_T > 0$ and, by the continuity of $X^\theta$ in $\theta_0$, we can find $0 < \delta < \nu_0$ such that for all $|\theta - \theta_0| \le \delta$ we have

$$||X^\theta - X^{\theta_0}||_T < \inf_{|\theta-\theta_0|>\nu_0} ||X^\theta - X^{\theta_0}||_T.$$

Choose $\epsilon_0$ such that for all $0 < \epsilon < \epsilon_0$ we have $\epsilon L < \delta$. Then for such $\epsilon$ we obtain that

$$
\begin{align*}
\inf_{|\beta|>L} ||\frac{1}{\epsilon}(X^{\theta_0+\epsilon\beta} - X^{\theta_0})||_T &= \frac{1}{\epsilon} \inf_{|\theta-\theta_0|>L\epsilon} ||X^\theta - X^{\theta_0}||_T \\
&= \frac{1}{\epsilon} \inf_{L\epsilon<|\theta-\theta_0|<\nu_0} ||X^\theta - X^{\theta_0}||_T = \inf_{L<|\beta|<\nu_0/\epsilon} ||\frac{1}{\epsilon}(X^{\theta_0+\epsilon\beta} - X^{\theta_0})||_T \\
&\ge \frac{1}{\epsilon} \inf_{L\epsilon<|\theta-\theta_0|<\nu_0} (||(\dot{X}^{\theta_0}, \theta-\theta_0)||_T - ||X^\theta - X^{\theta_0} - {}^\top (\theta - \theta_0)\, \dot{X}^{\theta_0}||_T) \ge \frac{cL}{2},
\end{align*}
$$

which tends to infinity as $L \to \infty$. $\square$

To prove our theorem we start with a discussion of the relation between the convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ and the continuity of the functional $\arg\inf$. \ No newline at end of file diff --git a/samples/texts/6084721/page_21.md b/samples/texts/6084721/page_21.md new file mode 100644 index 0000000000000000000000000000000000000000..835247b1132834dfded4b99488e2981026756e28 --- /dev/null +++ b/samples/texts/6084721/page_21.md @@ -0,0 +1,32 @@ +We denote by $D(\mathbb{R}^+, C_{\text{loc}}^0)$ the subspace of the space $D(\mathbb{R}^+, C_{\text{loc}})$ containing
the elements $Z$ such that for all continuity points $1/N$ and $N$ we have

$$ \sup_{1/N \le t \le N} \sup_{|\beta| > L} |Z_t(\beta)| \to 0 $$

as $L \to \infty$.
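The vanishing-tail condition just introduced is what keeps maximizers from escaping to infinity. A toy numerical illustration (an assumed example, not from the paper): for functions $Z^n(\beta) = e^{-(\beta - 1 - 1/n)^2}$ converging locally uniformly to $Z(\beta) = e^{-(\beta-1)^2}$, with all functions vanishing as $|\beta| \to \infty$, the grid arg sup of $Z^n$ converges to the unique arg sup of $Z$.

```python
import numpy as np

# Z^n(beta) = exp(-(beta - 1 - 1/n)^2) converges locally uniformly to
# Z(beta) = exp(-(beta - 1)^2); both decay at infinity (the tail condition),
# so the grid arg sup of Z^n converges to the arg sup of Z, which is beta = 1.
beta = np.linspace(-50.0, 50.0, 100001)

def Z(b, n=None):
    shift = 1.0 if n is None else 1.0 + 1.0 / n
    return np.exp(-(b - shift) ** 2)

argsups = [beta[np.argmax(Z(beta, n))] for n in (1, 10, 100, 1000)]
# argsups approaches 1.0, the unique maximizer of the limit function
```

Without the tail condition (for instance, if $Z^n$ had a second bump drifting off to $\beta = n$ with height approaching the supremum), the sequence of maximizers could diverge even though $Z^n \to Z$ locally uniformly; this is the situation excluded by condition (22) below.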
For $Z \in D(\mathbb{R}^+, C_{\text{loc}}^0)$ and a point of continuity $T > 0$ of $Z$ we set

$$ \hat{\beta}_T^n = \arg \sup_{\beta \in \mathbb{R}^m} |Z_T^n(\beta)|, \quad \hat{\beta}_T = \arg \sup_{\beta \in \mathbb{R}^m} |Z_T(\beta)|. $$

We suppose as usual that if $\sup_{\beta \in \mathbb{R}^m} |Z_T^n(\beta)|$ is achieved in several points we choose one of them arbitrarily. We note that $\sup_{\beta \in \mathbb{R}^m} |Z_T^n(\beta)|$ may not be achieved. In this case we put $\arg \sup_{\beta \in \mathbb{R}^m} |Z_T^n(\beta)| = +\infty$ and $|Z_T^{n}(+\infty)| = \overline{\lim}_{|\beta| \to +\infty} |Z_T^n(\beta)|$.

**Lemma 2** We suppose that the sequence of elements $Z^n = (Z_t^n(\beta))_{\beta \in \mathbb{R}^m, t \ge 0}$ from $D(\mathbb{R}^+, C_{\text{loc}})$ converges in the Skorohod topology to an element $Z = (Z_t(\beta))_{\beta \in \mathbb{R}^m, t \ge 0}$ from $D(\mathbb{R}^+, C_{\text{loc}}^0)$. We suppose also that for a point of continuity $T$ of $Z$ we have:

$$ \lim_{L \to +\infty} \overline{\lim}_{n \to +\infty} \sup_{|\beta| > L} |Z_T^n(\beta)| = 0. \qquad (22) $$

Then, if $\hat{\beta}_T$ is unique, we have

$$ \hat{\beta}_T^n \to \hat{\beta}_T. $$

**Remark 2** One easily sees that there are examples where the condition (22) is not satisfied and the convergence of $\hat{\beta}_T^n$ fails.

**Proof.** If we suppose that $\hat{\beta}_T^n$ does not converge to $\hat{\beta}_T$ then we can have two cases:

a) there exists a compact set $\mathcal{K}$ such that $\hat{\beta}_T^n \in \mathcal{K}$ for each $n \ge 1$;

b) there exists a subsequence along which $|\hat{\beta}_T^n| \to +\infty$.

In case a) one easily gets a contradiction with the convergence of $Z^n$ to
$Z$ in $D(\mathbb{R}^+, C_{\text{loc}})$ and the uniqueness of $\hat{\beta}_T$.
In the case b) we write for each
$L > 0$

$$ \overline{\lim}_{n \to \infty} |Z_T^n(\hat{\beta}_T^n) - Z_T(\hat{\beta}_T^n)| \le \overline{\lim}_{n \to +\infty} \sup_{|\beta| > L} |Z_T^n(\beta)| + \sup_{|\beta| > L} |Z_T(\beta)| $$ \ No newline at end of file diff --git a/samples/texts/6084721/page_22.md b/samples/texts/6084721/page_22.md new file mode 100644 index 0000000000000000000000000000000000000000..e362a8d8d4680ad8960ffd9c31c1e1f391520804 --- /dev/null +++ b/samples/texts/6084721/page_22.md @@ -0,0 +1,32 @@ +and then, letting $L \to +\infty$ and using condition (22), we get that

$$ \overline{\lim}_{n \to \infty} |Z_T^n(\hat{\beta}_T^n) - Z_T(\hat{\beta}_T^n)| = 0. $$

Since $Z \in D(\mathbb{R}^+, C_{\text{loc}}^0)$ this leads to

$$ \overline{\lim}_{n \to \infty} Z_T^n(\hat{\beta}_T^n) = 0. $$

But $|Z_T^n(\hat{\beta}_T^n)| = \sup_{\beta \in \mathbb{R}^m} |Z_T^n(\beta)|$ and the convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ gives $\sup_{\beta \in K_i} |Z_T^n(\beta)| \to \sup_{\beta \in K_i} |Z_T(\beta)|$ for each $i \ge 1$. This implies that $Z_T(\beta) = 0$ for all $\beta \in \mathbb{R}^m$, contradicting the uniqueness of $\hat{\beta}_T$. $\square$

**Lemma 3** We suppose that $Z^n$ is a sequence of random processes with paths in $D(\mathbb{R}^+, C_{\text{loc}})$ converging weakly to $Z \in D(\mathbb{R}^+, C_{\text{loc}}^0)$. Let $T$ be a point of continuity of $Z$. If $\hat{\beta}_T$ is unique and for every $\delta > 0$ we have

$$ \lim_{L \to +\infty} \overline{\lim}_{n \to +\infty} P(\sup_{|\beta|>L} |Z_T^n(\beta)| \ge \delta) = 0, \quad (23) $$

then $\hat{\beta}_T^n$ converges weakly to $\hat{\beta}_T$.

**Proof.** The application $\phi(Z) = \arg\sup_{\beta \in \mathbb{R}^m} |Z(\beta)|$, with the usual convention concerning $\arg\sup$ as given above, is continuous on the set

$$ A = \{Z \in C_{\text{loc}} : \limsup_{L \to +\infty} \sup_{|\beta|>L} |Z(\beta)| = 0\}. $$

Denote by $Q^n$ and $Q$ the laws on $C_{\text{loc}}$ of $Z_T^n$ and $Z_T$ respectively.
If $Q(A^c) = 0$ we have the weak convergence $\phi(Z_T^n) \to \phi(Z_T)$ as $n \to \infty$. But $Q(A^c) = 0$ is equivalent to

$$ Q\left(\limsup_{L \to \infty} \sup_{|\beta|>L} |Z(\beta)| > \delta\right) = 0 $$

for every $\delta > 0$, and we have

$$
\begin{align*}
Q\left(\limsup_{L \to \infty} \sup_{|\beta|>L} |Z(\beta)| > \delta\right) &= \lim_{L \to \infty} Q\left(\sup_{|\beta|>L} |Z(\beta)| > \delta\right) \\
& \leq \lim_{L \to \infty} \overline{\lim}_{n \to \infty} Q^n\left(\sup_{|\beta|>L} |Z(\beta)| > \delta\right) = 0. \quad \square
\end{align*}
$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_23.md b/samples/texts/6084721/page_23.md new file mode 100644 index 0000000000000000000000000000000000000000..14c378c85c7ea6bcfaf93dc1b9e8136eccfa6d40 --- /dev/null +++ b/samples/texts/6084721/page_23.md @@ -0,0 +1,23 @@ +**Proof of theorem 5.** To prove a weak convergence of the processes $Y^\epsilon$ to $Y$ in $D(\mathbb{R}^+, C_{\text{loc}})$, as defined in Remark 1 by (20) and (21) respectively, we cannot directly apply the semimartingale convergence theorem, because for predictable continuous processes the martingale problem does not have a unique solution. This is why we start with the proof of a weak convergence of $U^\epsilon = (U_t^{\epsilon,\beta})_{\beta \in \mathbb{R}^m, t \ge 0}$ to $U = (U_t^\beta)_{\beta \in \mathbb{R}^m, t \ge 0}$ in $D(\mathbb{R}^+, C_{\text{loc}})$ where

$$ U_t^{\epsilon,\beta} = \frac{1}{\epsilon}(X_t^{\epsilon,\theta_0} - X_t^{\theta_0}) - {}^{\top}\beta\, \dot{X}_t^{\theta_0} \quad \text{and} \quad U_t^\beta = l_t^{(\theta_0)} - {}^{\top}\beta\, \dot{X}_t^{\theta_0}. $$

Since

$$ U^{\epsilon,\beta} = \frac{1}{\epsilon}(A^{\epsilon,\theta_0} - X^{\theta_0}) - {}^{\top}\beta\, \dot{X}^{\theta_0} + \frac{1}{\epsilon}M^{\epsilon,\theta_0}, $$

conditions 1) - 4) imply the conditions of group 1 in theorem 3. The condition of group 2 is automatically satisfied since the martingale parts do not depend on $\beta$.
The conditions of group 3 are also satisfied since for all $0 \le t \le T$ we have

$$ \left| \frac{1}{\epsilon}(A_t^{\epsilon,\theta_0} - X_t^{\theta_0}) - {}^{\top}\beta\, \dot{X}_t^{\theta_0} - \left(\frac{1}{\epsilon}(A_t^{\epsilon,\theta_0} - X_t^{\theta_0}) - {}^{\top}\beta'\, \dot{X}_t^{\theta_0}\right) \right| \le |\beta - \beta'| \sup_{0 \le t \le T} |\dot{X}_t^{\theta_0}|. $$

Since the norm $||\cdot||_T$ is bounded above by the supremum norm, using the Skorohod representation theorem we see that the processes $||U^\epsilon|| = (||U^{\epsilon,\beta}||_t)_{\beta \in \mathbb{R}^m, t \ge 0}$ converge weakly to $||U|| = (||U^\beta||_t)_{\beta \in \mathbb{R}^m, t \ge 0}$ in the space $D(\mathbb{R}^+, C_{\text{loc}})$.

From condition 5) we have for all $0 < t \le T$ and $\beta \in K_i$ that

$$ \left| \, ||\frac{1}{\epsilon}(X^{\epsilon,\theta_0} - X^{\theta_0+\epsilon\beta})||_t - ||U^{\epsilon,\beta}||_t \, \right| \le ||\frac{1}{\epsilon}(X^{\theta_0+\epsilon\beta} - X^{\theta_0}) - {}^{\top}\beta\, \dot{X}^{\theta_0}||_T \to 0, \quad (24) $$

as $\epsilon \to 0$, and hence we have the weak convergence of $Y^\epsilon$ to $Y$ in $D(\mathbb{R}^+, C_{\text{loc}})$, as defined in Remark 1 by (20) and (21) respectively.

To prove the convergence of $\hat{\beta}_T^\epsilon$ to $\hat{\beta}_T$ we use the Skorohod representation theorem and lemma 2 with the process $Z^\epsilon$ defined by: for $t \ge 0$

$$ Z_t^{\epsilon,\beta} = \frac{1}{1 + Y_t^{\epsilon,\beta}}. $$ \ No newline at end of file diff --git a/samples/texts/6084721/page_24.md b/samples/texts/6084721/page_24.md new file mode 100644 index 0000000000000000000000000000000000000000..79d626eef5c62749eddc882eaafbd12c4617b4b5 --- /dev/null +++ b/samples/texts/6084721/page_24.md @@ -0,0 +1,39 @@ +We now establish that (23) holds.
We note that for all $|\beta| \ge L$ and for all $\epsilon \le \epsilon_0(L)$

$$
\begin{aligned}
Y_T^{\epsilon, \beta} &\ge \frac{1}{\epsilon} ||X^{\theta_0+\epsilon\beta} - X^{\theta_0}||_T - \frac{1}{\epsilon} ||X^{\epsilon,\theta_0} - X^{\theta_0}||_T \\
&\ge \inf_{|\beta| \ge L} \frac{1}{\epsilon} ||X^{\theta_0+\epsilon\beta} - X^{\theta_0}||_T - \frac{1}{\epsilon} ||X^{\epsilon,\theta_0} - X^{\theta_0}||_T,
\end{aligned}
$$

implying that

$$
\begin{align*}
P^\epsilon(\sup_{|\beta| \ge L} Z_T^{\epsilon, \beta} \ge \delta) &= P^\epsilon(\inf_{|\beta| \ge L} Y_T^{\epsilon, \beta} \le \frac{1}{\delta} - 1) \\
&\le P^\epsilon(\frac{1}{\epsilon} ||X^{\epsilon, \theta_0} - X^{\theta_0}||_T \ge \inf_{|\beta| \ge L} \frac{1}{\epsilon} ||X^{\theta_0+\epsilon\beta} - X^{\theta_0}||_T - \frac{1}{\delta} + 1).
\end{align*}
$$

Choosing $\delta$ so that the corresponding level is a point of continuity of the law of $||l^{(\theta_0)}||_T$, and letting $\epsilon \to 0$, $L \to +\infty$, we obtain (23) using the strong identifiability condition.

From the weak convergence of $Y^\epsilon$ to $Y$ in $D(\mathbb{R}^+, C_{\text{loc}})$, the condition (23) and the uniqueness of $\hat{\beta}_T$ we get the weak convergence of $\hat{\beta}_T^\epsilon$ to $\hat{\beta}_T$. $\square$

**Corollary 1** (cf. Kutoyants [13]) Let the conditions 1)-6) of theorem 5 be satisfied and let $||\cdot||$ be an $L_2$-norm. We suppose that the matrix

$$ V_T^{\theta_0} = \int_0^T \dot{X}_s^{\theta_0} \, {}^{\top}\dot{X}_s^{\theta_0} \, ds $$

is invertible. Then

$$ \hat{\beta}_T = (V_T^{\theta_0})^{-1} \int_0^T l_s^{(\theta_0)}\, \dot{X}_s^{\theta_0} \, ds $$

and the law of $\hat{\beta}_T^\epsilon$ is asymptotically Gaussian $N(m_T, \sigma_T^2)$ with

$$ m_T = (V_T^{\theta_0})^{-1} \int_0^T E B_s^{(\theta_0)}\, \dot{X}_s^{\theta_0} \, ds, $$

$$ \sigma_T^2 = (V_T^{\theta_0})^{-1} \left[ \int_0^T \int_0^T E(N_s^{(\theta_0)} N_u^{(\theta_0)})\, \dot{X}_s^{\theta_0} \, {}^{\top}\dot{X}_u^{\theta_0} \, ds\,du \right] (V_T^{\theta_0})^{-1}.
$$

We now consider the Ornstein-Uhlenbeck process defined by the equation

$$ dX_t = \theta X_t dt + \epsilon dW_t, $$ \ No newline at end of file diff --git a/samples/texts/6084721/page_25.md b/samples/texts/6084721/page_25.md new file mode 100644 index 0000000000000000000000000000000000000000..4a04e0c632225ca2c2c3eb6e19354956a2f417bd --- /dev/null +++ b/samples/texts/6084721/page_25.md @@ -0,0 +1,17 @@ +where $W = (W_t)_{t \ge 0}$ is a standard Wiener process, $t \in [0, T]$, $X_0 = x_0 \ne 0$ and $\theta \in \Theta$, a compact convex subset of $\mathbb{R}$. One can easily show that $X_t^\theta = x_0 \exp(\theta t)$ and that

$$A_t^{\epsilon,\theta} = \int_0^t \theta X_s ds, \quad \langle M^{\epsilon,\theta} \rangle_t = \langle \epsilon W \rangle_t = \epsilon^2 t.$$

In this case the conditions of theorem 1 are satisfied, and the conditions
1)-4) of theorem 5 are also satisfied with $b(t, X) = \theta_0 X_t$, $c(t, X) = 1$ and $u_s = s$.
Conditions 5), 6) are automatically satisfied since $\dot{X}_t^{\theta} = x_0 t \exp(\theta t)$ and for $\theta \in \Theta$, $\|x_0 t \exp(\theta t)\|_T > 0$. Since

$$\hat{\beta}_T = \arg \inf_{\beta \in \mathbb{R}} ||W_t - \beta x_0 t \exp(\theta_0 t)||_T,$$

condition 7) is satisfied, for instance, for sup- and $L^p$-norms with $p \ge 1$ (see
Henaff [6]).

**Corollary 2** (cf. Henaff [6], Kutoyants, Pilibossian [17]) In the case of the Ornstein-Uhlenbeck process the minimum distance estimators are consistent. If $\hat{\beta}_T$ is unique, then the normalised minimum distance estimators converge weakly to $\hat{\beta}_T$.
If in addition we take the $L^2$-norm, the normalised minimum distance estimators will be asymptotically $\mathcal{N}(0, \sigma_T^2)$ where

$$\sigma_T^2 = \left( \int_0^T \int_0^T st(t \wedge s) \exp((s+t)\theta_0) \, ds\, dt \right) \left( x_0 \int_0^T s^2 \exp(2s\theta_0) \, ds \right)^{-2} .$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_26.md b/samples/texts/6084721/page_26.md new file mode 100644 index 0000000000000000000000000000000000000000..f350cbc0369dfce9d8abbab9e4b8525e134ad976 --- /dev/null +++ b/samples/texts/6084721/page_26.md @@ -0,0 +1,23 @@ +References

[1] P. BERTRAND AND YU.A. KUTOYANTS. *A minimum distance estimator for partially observed linear stochastic systems*. Preprint 95-1, 1995.

[2] P. BILLINGSLEY. *Convergence of probability measures*. Wiley, New York, 1968.

[3] E. BOLTHAUSEN. *Convergence in distribution of minimum distance estimators*. Metrika 24, p. 215-227, 1977.

[4] K. DZIAPARIDZE AND E. VALKEILA. *On the Hellinger type distances for filtered experiments*. Probability Theory and Related Fields 85, p. 105-117, 1990.

[5] E. FOURNIE AND YU.A. KUTOYANTS. *Estimateur de la distance minimale pour processus de diffusion ergodiques*. INRIA, Rapports de recherche N 1952, p. 1-33, juillet 1993.

[6] S. HENAFF. *On minimum distance estimate of the parameter of the Ornstein-Uhlenbeck process*. Preprint 95-12, 1995.

[7] R. HÖPFNER AND YU.A. KUTOYANTS. *On minimum distance estimation in recurrent Markov step process I and II*. (submitted for publication).

[8] J. JACOD. *Calcul stochastique et problèmes de martingales*. Lecture Notes in Mathematics 714, Springer-Verlag, 1979.

[9] J. JACOD AND P. MANO. *Une évaluation de la distance entre les lois d'une semi-martingale et d'un processus à accroissements indépendants*. Stochastics, Vol. 25, p. 87-124, 1988.

[10] J. JACOD AND A.N. SHIRYAYEV. *Limit theorems for stochastic processes*. Springer-Verlag, Berlin, 1987.

[11] H. KOUL AND T. DE WET.
*Minimum distance estimation in a linear regression model*. The Annals of Statistics, Vol. 11, n°3, p. 921-932, 1983. \ No newline at end of file diff --git a/samples/texts/6084721/page_27.md b/samples/texts/6084721/page_27.md new file mode 100644 index 0000000000000000000000000000000000000000..167b048b4af2cb5df3dd5bdd8431e3eec0166a43 --- /dev/null +++ b/samples/texts/6084721/page_27.md @@ -0,0 +1,23 @@ +[12] YU.A. KUTOYANTS. *Minimum distance parameter estimation for diffusion type observations*. C. R. Acad. Sci. Paris, 312, série 1, p. 637-642, 1991.

[13] YU.A. KUTOYANTS. *Identification of dynamical systems with small noise*. Kluwer Academic Publishers, London, 1995.

[14] YU.A. KUTOYANTS AND O. LESSI. *Minimum distance estimation for diffusion random fields*. Preprint, 1995.

[15] YU.A. KUTOYANTS AND F. LIESE. *On minimum distance estimation for spatial Poisson processes*. Ann. Academiae Scient. Fennicae, ser. A.1, Vol. 17, p. 65-71, 1992.

[16] YU.A. KUTOYANTS AND T. MOURID. *Estimation dans un modèle autoregressif avec retards*. C. R. Acad. Sci. Paris, 315, série 1, p. 455-458, 1992.

[17] YU.A. KUTOYANTS AND P. PILIBOSSIAN. *On minimum $L_1$ estimate of the parameter of the Ornstein-Uhlenbeck process*. Statistics and Probability Letters, 20, p. 117-123, 1994.

[18] C. PARR. *Minimum distance estimation: bibliography*. Commun. Statist. Theor. Meth., A10(12), p. 1205-1224.

[19] T. PFAFF. *Quick consistency of quasi-maximum likelihood estimators*. The Annals of Statistics, Vol. 10, n°3, p. 990-1005, 1982.

[20] V.N. LA RICCIA. *Asymptotic properties of weighted $L^2$ quantile distance estimators*. The Annals of Statistics, Vol. 10, n°2, p. 621-624, 1982.

[21] R.SH. LIPTSER AND A.N. SHIRYAYEV. *Theory of martingales*. Mathematics and its Applications, Kluwer Academic Publishers, Netherlands, 1989.

[22] P.W. MILLAR. *The minimax principle in asymptotic statistical theory*. Lecture Notes in Mathematics, 976, p. 76-265, 1983.

[23] P.W. MILLAR.
*A general approach to the optimality of minimum distance estimators*. Trans. Amer. Math. Soc., 286, p. 377-418, 1984. \ No newline at end of file diff --git a/samples/texts/6084721/page_28.md b/samples/texts/6084721/page_28.md new file mode 100644 index 0000000000000000000000000000000000000000..f4380e38e3c734fb2c729b4f847781cd468d5415 --- /dev/null +++ b/samples/texts/6084721/page_28.md @@ -0,0 +1,16 @@ +[24] A.N. SHIRYAYEV AND V. SPOKOINII. *Statistics of random processes.*
(submitted for publication)

[25] C. SIBEUX. *Convergence faible uniforme des semi-martingales vers*
*un processus de diffusion*. Preprint, Department of Mathematics,
University of Angers, n° 25, 1996.

[26] L.YU. VOSTRIKOVA. *On a weak convergence of likelihood ratio processes*
*of general statistical parametric models*. Stochastics 23, p. 277-298, 1988.

[27] L.YU. VOSTRIKOVA. *On weak convergence of parameter estimators of*
*general statistical parametric models*. Séminaire de Probabilités, Rennes,
p. 146-167, 1987.

[28] J. WOLFOWITZ. *Minimum distance method*. Ann. Math. Statist., 28, p.
75-88, 1957. \ No newline at end of file diff --git a/samples/texts/6084721/page_3.md b/samples/texts/6084721/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..9b35da4573a65755b0bc7b6d823e126eee6fd8a2 --- /dev/null +++ b/samples/texts/6084721/page_3.md @@ -0,0 +1,13 @@ +Minimum distance estimators were studied by many authors. Among the papers published on this subject we mention the papers of Wolfowitz [28], La Riccia [20], Bolthausen [3], Koul and De Wet [11], Millar [22, 23]; further references can be found in Parr [18]. Notably we draw attention to the paper of Millar [23], "A General Approach to the Optimality of Minimum Distance Estimators", where one can find general conditions on the observed processes providing the consistency, the asymptotic normality and the optimality of the minimum distance estimators.
For stochastic processes the minimum distance estimators were studied: for diffusion processes in the papers of Kutoyants [12], Kutoyants and Pilibossian [17], Henaff [6], for diffusion processes with delay in the papers of Kutoyants and Mourid [16], for ergodic diffusion processes in Fournie, Kutoyants [5], for spatial Poisson processes by Kutoyants, Liese [15], for partially observed linear stochastic systems by Bertrand and Kutoyants [1], for recurrent Markov step processes by Höpfner, Kutoyants [7], and for diffusion random fields by Kutoyants, Lessi [14]. Most of these results can also be found in the book of Kutoyants [13].

From the examples given in the paper of Millar [23] and in the book of Kutoyants [13], we can deduce that a natural formulation of the minimum distance problem for semimartingales could be the following. We suppose that on the probability space $(\Omega, \mathcal{F}, \mathbb{F}^{\epsilon}, P^{\epsilon})$ with the filtration $\mathbb{F}^{\epsilon} = (\mathcal{F}_{t}^{\epsilon})_{t \ge 0}$ we have a sequence of special semimartingales $X^{\epsilon, \theta} = (X_{t}^{\epsilon, \theta})_{t \ge 0}$ such that

$$X_t^{\epsilon, \theta} = X_0^{\epsilon, \theta} + A_t^{\epsilon, \theta} + M_t^{\epsilon, \theta} \quad (1)$$

where $A^{\epsilon, \theta} = (A_t^{\epsilon, \theta})_{t \ge 0}$ is a predictable process of locally integrable variation and $M^{\epsilon, \theta} = (M_t^{\epsilon, \theta})_{t \ge 0}$ is a local martingale. We suppose that as $\epsilon \to 0$ the $X^{\epsilon, \theta}$ converge weakly to some deterministic càdlàg function of locally bounded variation $X^{\theta} = (X_t^{\theta})_{t \ge 0}$, so that $X^{\epsilon, \theta}$ can be interpreted as the observation of $X^{\theta}$ with the systematic error $A^{\epsilon, \theta} - X^{\theta}$ and the martingale noise $M^{\epsilon, \theta}$.

We suppose that in the space of trajectories $D([0, T], \mathbb{R})$ with $T > 0$ we fix a norm $\|\cdot\|_{T}$.
Then we can define the minimum distance estimator as follows:

$$\hat{\theta}_T^{\epsilon} = \arg \inf_{\theta \in \Theta} \| X^{\epsilon, \theta_0} - X^{\theta} \|_T$$

with the following convention about $\arg\inf$: if the $\inf$ is not achieved, the estimator is set equal to $\infty$, and if it is achieved at several points we choose one of them arbitrarily. The \ No newline at end of file diff --git a/samples/texts/6084721/page_4.md b/samples/texts/6084721/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..9cd82c5e021a7dc41cf6856bbaf4118a235649a0 --- /dev/null +++ b/samples/texts/6084721/page_4.md @@ -0,0 +1,34 @@
aim of this paper is to find conditions, expressed in terms of the triplets of the observed semimartingales, under which the minimum distance estimators are consistent and asymptotically normal.

In Section 2 we give criteria for convergence in the space $D(\mathbb{R}^+, C_{\text{loc}})$. In Section 3 we express the conditions in terms of the triplets of semimartingales providing a weak convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ (see theorem 3). In Section 4 we give the conditions which imply consistency (see theorem 4) and a weak convergence of the normalized estimators (see theorem 5). For the $L_2$-norm we obtain, under the conditions of theorem 5, the asymptotic normality of the minimum distance estimators (see corollary 1).

The main tool used in this paper is the general theory of stochastic processes, and we refer the reader for the technical details to the books of Jacod [8], Jacod, Shiryayev [10], Liptser, Shiryayev [21].

## 2 The Space $D(\mathbb{R}^{+}, C_{\text{loc}})$

We consider the Skorohod space $D(\mathbb{R}^+, C_{\text{loc}})$ with values in the space of continuous functions $C_{\text{loc}}(\mathbb{R}^m)$ endowed with the locally uniform metric.
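The estimator $\hat{\theta}_T^{\epsilon}$ defined in the introduction is easy to illustrate numerically. The following toy sketch (ours, not from the paper; the model, function names and parameter values are all assumptions made for illustration) takes the limit function $X_t^\theta = \theta t$, observes $\theta_0 t$ plus a small random-walk noise, and minimizes a discretized $L_2$-norm over a finite grid of $\theta$ values.

```python
import math
import random

random.seed(1)

# Toy sketch (ours): limit function X_t^theta = theta * t, observation
# X^{eps, theta0} = theta0 * t + eps * W_t with W a discretized Brownian path.
# The minimum distance estimator minimizes a discrete L2 norm over a theta grid.
def minimum_distance_estimate(observed, times, theta_grid):
    def dist(theta):
        return math.sqrt(sum((x - theta * t) ** 2 for x, t in zip(observed, times)))
    return min(theta_grid, key=dist)

T, n = 1.0, 200
times = [i * T / n for i in range(n + 1)]
theta0, eps = 0.7, 0.02

w, walk = 0.0, []
for _ in times:
    walk.append(w)
    w += random.gauss(0.0, math.sqrt(T / n))   # Brownian increments
observed = [theta0 * t + eps * wt for t, wt in zip(times, walk)]

theta_grid = [k / 100 for k in range(201)]      # grid on [0, 2], step 0.01
estimate = minimum_distance_estimate(observed, times, theta_grid)
print(abs(estimate - theta0) < 0.2)             # prints True
```

As $\epsilon \to 0$ the estimate approaches $\theta_0$, which is the consistency property studied below under the conditions of theorem 4.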
We recall that if $(K_i)_{i \ge 1}$ is an increasing sequence of compact sets exhausting $\mathbb{R}^m$, then the distance $d(X,Y)$ between two elements $X = (X^\theta)_{\theta \in \mathbb{R}^m}$ and $Y = (Y^\theta)_{\theta \in \mathbb{R}^m}$ of the space $C_{\text{loc}}(\mathbb{R}^m)$ can be defined by:

$$d(X, Y) = \sum_{i=1}^{+\infty} 2^{-i} \frac{\sup_{\theta \in K_i} |X^\theta - Y^\theta|}{1 + \sup_{\theta \in K_i} |X^\theta - Y^\theta|}.$$

It is well known that the space $D(\mathbb{R}^+, C_{\text{loc}})$ endowed with the Skorohod distance is a complete separable space, since $C_{\text{loc}}(\mathbb{R}^m)$ is itself complete and separable. This fact permits us to use Prohorov's theorem [2] about tightness and relative compactness to obtain criteria for a weak convergence.

To characterize the compact sets of the space $D(\mathbb{R}^+, C_{\text{loc}})$, we introduce the following moduli of continuity, the first of which is the standard modulus of continuity in the space $D(\mathbb{R}^+, \mathbb{R})$ applied to $X^\theta$ with a fixed value of $\theta \in \mathbb{R}^m$.

Denoting by $\mathcal{G}$ the set of the partitions $\{0 = t_0 < t_1 < \dots < t_n < N\}$ of the interval $[0,N]$ with $N > 0$, satisfying the condition $\min_{0 \le j \le n-1} (t_{j+1} - t_j) > h$, \ No newline at end of file diff --git a/samples/texts/6084721/page_5.md b/samples/texts/6084721/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..e0efd22112ad13681dbe06498a831060887df834 --- /dev/null +++ b/samples/texts/6084721/page_5.md @@ -0,0 +1,40 @@
we set

$$
W_h^N(X^\theta) = \inf_{\mathcal{G}} \max_{0 \le j \le n} \sup_{s, t \in [t_j, t_{j+1})} |X_s^\theta - X_t^\theta|
$$

and let

$$
V_h^{N,i}(X) = \sup_{0 \le t \le N} \sup_{\substack{\theta, \theta' \in K_i \\ |\theta - \theta'| \le h}} |X_t^\theta - X_t^{\theta'}|.
$$

where $K_i$ is a compact set from the above definition of the distance in the space $C_{\text{loc}}(\mathbb{R}^m)$.

The following theorem gives necessary and sufficient conditions for relative compactness in $D(\mathbb{R}^+, C_{\text{loc}})$.

**Theorem 1** (see [26], [25]). A subset $\mathcal{K}$ of $D(\mathbb{R}^+, C_{\text{loc}})$ is relatively compact if and only if the following conditions are satisfied: for each $i \ge 1$ and $N > 0$

1) $\sup_{X \in \mathcal{K}} \sup_{0 \le t \le N} \sup_{\theta \in K_i} |X_t^\theta| < \infty,$

2) $\lim_{h \to 0} \sup_{X \in \mathcal{K}} \sup_{\theta \in K_i} W_h^N(X^\theta) = 0,$

3) $\lim_{h \to 0} \sup_{X \in \mathcal{K}} V_h^{N,i}(X) = 0.$

Now suppose that we have a sequence of stochastic processes $X^n = (X_t^{n,\theta})_{t \ge 0, \theta \in \mathbb{R}^m}$ and a “limit” process $X = (X_t^\theta)_{t \ge 0, \theta \in \mathbb{R}^m}$ with trajectories in the space $D(\mathbb{R}^+, C_{\text{loc}})$, given on the probability spaces $(\Omega^n, \mathcal{F}^n, P^n)$ and $(\Omega, \mathcal{F}, P)$ respectively. Then, using Prohorov's theorem [2] about tightness and relative compactness together with theorem 1, we can obtain the following.

**Theorem 2** (see [26], [25]). We assume that the finite-dimensional distributions of $X^n$ converge weakly to the finite-dimensional distributions of $X$. We suppose also that for each $\epsilon > 0$, $N > 0$ and $i \ge 1$

1) $\lim_{h \to 0} \overline{\lim}_{n \to +\infty} P^n(W_h^N(X^{n,\theta}) \geq \epsilon) = 0, \quad \forall \theta \in \mathbb{R}^m,$

2) $\lim_{h \to 0} \sup_{n \ge 1} P^n(V_h^{N,i}(X^n) \ge \epsilon) = 0.$

Then we have a weak convergence of $X^n$ to $X$ in the Skorohod space $D(\mathbb{R}^+, C_{\text{loc}})$:

$$
X^n \xrightarrow{w(P^n)} X.
$$ \ No newline at end of file diff --git a/samples/texts/6084721/page_6.md b/samples/texts/6084721/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..f669be1239fc0a933e91a0c77d795526b435941a --- /dev/null +++ b/samples/texts/6084721/page_6.md @@ -0,0 +1,23 @@
## 3 Weak Convergence of Semimartingales in the Space $D(\mathbb{R}^+, C_{\text{loc}})$

We suppose that we are given probability spaces $(\Omega^n, \mathcal{F}^n, \mathbb{F}^n, P^n)$ and $(\Omega, \mathcal{F}, \mathbb{F}, P)$ endowed with the filtrations $\mathbb{F}^n = (\mathcal{F}_t^n)_{t \ge 0}$ and $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$. Let $X^{n,\theta} = (X_t^{n,\theta})_{t \ge 0}$ and $X^\theta = (X_t^\theta)_{t \ge 0}$ be semimartingales depending on the parameter $\theta \in \mathbb{R}^m$, defined on the spaces $(\Omega^n, \mathcal{F}^n, \mathbb{F}^n, P^n)$ and $(\Omega, \mathcal{F}, \mathbb{F}, P)$ respectively.

We suppose that the semimartingales $X^{n,\theta}$ are special and locally square-integrable, and that the semimartingale $X^\theta$ is continuous with a deterministic initial value. This means that we have the unique decompositions:

$$X_t^{n,\theta} = X_0^{n,\theta} + M_t^{n,\theta} + A_t^{n,\theta}, \quad X_t^\theta = X_0^\theta + M_t^\theta + A_t^\theta$$

where $M^{n,\theta} = (M_t^{n,\theta})_{t \ge 0}$ is a locally square-integrable martingale, $M^\theta = (M_t^\theta)_{t \ge 0}$ is a continuous martingale, and $A^{n,\theta} = (A_t^{n,\theta})_{t \ge 0}$ and $A^\theta = (A_t^\theta)_{t \ge 0}$ are predictable processes of locally integrable variation.
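A minimal concrete instance of such a decomposition (ours, not the paper's setting): a Poisson process $N$ with intensity $\lambda$ is a special locally square-integrable semimartingale with predictable part $A_t = \lambda t$ and martingale part $M_t = N_t - \lambda t$. The sketch below checks numerically that the martingale part has mean zero at the horizon.

```python
import random

random.seed(0)

# Illustration (ours): for a Poisson process N with intensity lam, the
# special-semimartingale decomposition is N_t = A_t + M_t, with predictable
# part A_t = lam * t and martingale part M_t = N_t - lam * t.
def poisson_count(lam, T):
    """Number of points of a rate-lam Poisson process in [0, T]."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)   # i.i.d. exponential inter-arrival times
        if t > T:
            return n
        n += 1

lam, T, n_paths = 2.0, 10.0, 2000
# E[M_T] = E[N_T] - lam*T = 0, so the empirical mean should be near zero.
mean_M_T = sum(poisson_count(lam, T) - lam * T for _ in range(n_paths)) / n_paths
print(abs(mean_M_T) < 1.0)   # prints True
```

Here the "systematic error" of the introduction vanishes and only the martingale noise remains, which is the situation the conditions below are designed to control.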
If we denote by $\mu^{n,\theta}$ the jump measure of the semimartingale $X^{n,\theta}$ and by $\nu^{n,\theta}$ its compensator, then we can write the following decompositions (see Jacod [8]):

$$X_t^{n,\theta} = X_0^{n,\theta} + A_t^{n,\theta} + X_t^{n,\theta,c} + \int_0^t \int_{\mathbb{R}\setminus\{0\}} x \, d(\mu^{n,\theta} - \nu^{n,\theta}),$$

$$X_t^\theta = X_0^\theta + A_t^\theta + X_t^{\theta,c},$$

where $X^{n,\theta,c} = (X_t^{n,\theta,c})_{t \ge 0}$ and $X^{\theta,c} = (X_t^{\theta,c})_{t \ge 0}$ are the continuous martingale components of $X^{n,\theta}$ and $X^\theta$ respectively.

We denote by $\langle X^{n,\theta,c} \rangle$ and $\langle X^{\theta,c} \rangle$ the predictable variations of the corresponding processes.

The aim of this part is to give conditions, expressed in terms of the predictable characteristics $(A^{n,\theta}, \langle X^{n,\theta,c} \rangle, \nu^{n,\theta})$ and $(A^\theta, \langle X^{\theta,c} \rangle, \nu^\theta)$ of the semimartingales $X^{n,\theta}$ and $X^\theta$ respectively, providing the existence of modifications of the processes $X^n = (X_t^{n,\theta})_{t \ge 0, \theta \in \mathbb{R}^m}$, $X = (X_t^\theta)_{t \ge 0, \theta \in \mathbb{R}^m}$ with trajectories in $D(\mathbb{R}^+, C_{\text{loc}})$ for which a weak convergence in $D(\mathbb{R}^+, C_{\text{loc}})$ takes place.

We suppose that the standard conditions on the triplets of the "limit" semimartingale, given in the book of Liptser, Shiryayev [21], are satisfied.
\ No newline at end of file diff --git a/samples/texts/6084721/page_7.md b/samples/texts/6084721/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..5d22f061b7240f3b1053e14028538bd151933f3f --- /dev/null +++ b/samples/texts/6084721/page_7.md @@ -0,0 +1,31 @@
Namely, we suppose that for each $\theta \in \mathbb{R}^m$ the predictable characteristics, as well as their linear combinations, uniquely determine the solution of the corresponding semimartingale problem, and that for all $t \ge 0$

$$A_t^\theta(X) = \int_0^t a^\theta(s, X) \, du_s, \quad (2)$$

$$\langle M^\theta \rangle_t (X) = \int_0^t c^\theta(s, X) \, du_s, \qquad (3)$$

$$\langle M^\theta, M^{\theta'} \rangle_t (X, Y) = \int_0^t d^{\theta,\theta'}(s, X, Y) \, du_s. \quad (4)$$

We assume that the functions $a^\theta(t, X)$, $c^\theta(t, X)$, $d^{\theta,\theta'}(t, X, Y)$ are predictable and that their absolute values are integrable with respect to $du_s$, where $(u_s)_{s\ge 0}$ is some continuous increasing deterministic function of locally bounded variation.

We suppose that the functions $a^\theta(t, X)$, $c^\theta(t, X)$ and $d^{\theta,\theta'}(t, X, Y)$ are continuous in the Skorohod metric and that they verify

$$|a^\theta(t, X)| \le L(t)\Big(1 + \sup_{s \le t} |X_s|\Big),$$

with analogous bounds for $c^\theta(t, X)$ and $d^{\theta,\theta'}(t, X, Y)$.

**Conditions of group 1.** For every $\theta \in \mathbb{R}^m$, $\epsilon > 0$ and $T > 0$ we have:

1) $X_0^{n,\theta} \xrightarrow{P^n} X_0^{\theta},$

2) $\overline{\lim}_{n \to +\infty} P^n\left(\int_0^T \int_{|x| > \delta} x^2 \, d\nu^{n,\theta} \ge \epsilon\right) = 0, \quad \forall \delta > 0.$ \ No newline at end of file diff --git a/samples/texts/6084721/page_8.md b/samples/texts/6084721/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..761ac79c729be345358ebd38857afca104d4ff09 --- /dev/null +++ b/samples/texts/6084721/page_8.md @@ -0,0 +1,23 @@
3) $\overline{\lim}_{n \to +\infty} P^n(\sup_{t \le T} |A_t^{n, \theta} - \int_0^t a^\theta(s, X^{n, \theta}) \, du_s| \ge \epsilon) = 0$.
4) $\overline{\lim}_{n \to +\infty} P^n(\sup_{t \le T} |\langle X^{n,\theta,c} \rangle_t - \int_0^t c^\theta(s, X^{n,\theta}) \, du_s| \ge \epsilon) = 0$.

**Conditions of group 2.** For every $\theta, \theta' \in \mathbb{R}^m$, $\epsilon > 0$ and $T > 0$ we have:

5) $\overline{\lim}_{n \to +\infty} P^n(\sup_{t \le T} |\langle X^{n,\theta,c}, X^{n,\theta',c} \rangle_t - \int_0^t d^{\theta,\theta'}(s, X^{n,\theta}, X^{n,\theta'}) \, du_s| \ge \epsilon) = 0$.

**Conditions of group 3.** There exist $p \ge 2$ and $\alpha > m$ such that for all bounded stopping times $\tau$ and $\theta, \theta' \in K_i$, $\theta \ne \theta'$, $i \ge 1$, we have

6) $\sup_n (E^n | A_\tau^{n,\theta} - A_\tau^{n,\theta'} |^p / |\theta - \theta'|^\alpha) \le C_i,$

7) $\sup_n (E^n (\langle X^{n,\theta,c} - X^{n,\theta',c} \rangle_\tau)^{p/2} / |\theta - \theta'|^\alpha) \le C_i,$

8) $\sup_n (E^n \int_0^\tau \int_{\mathbb{R}^2 \setminus \{(0,0)\}} (x-y)^2 \, d\nu^{n,\theta,\theta'} / |\theta-\theta'|^\alpha) \le C_i,$

9) $\sup_n (E^n \int_0^\tau \int_{\mathbb{R}^2 \setminus \{(0,0)\}} |x-y|^p \, d\nu^{n,\theta,\theta'} / |\theta-\theta'|^\alpha) \le C_i.$

**Theorem 3.** We suppose that conditions 1)-9) are satisfied. Then there are modifications of the processes $X^n = (X_t^{n,\theta})_{\theta \in \mathbb{R}^m, t \ge 0}$ and $X = (X_t^\theta)_{\theta \in \mathbb{R}^m, t \ge 0}$ with paths in $D(\mathbb{R}^+, C_{\text{loc}})$ such that the weak convergence

$$X^n \xrightarrow{w(P^n)} X$$

in $D(\mathbb{R}^+, C_{\text{loc}})$ takes place. \ No newline at end of file diff --git a/samples/texts/6084721/page_9.md b/samples/texts/6084721/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..9f4066198a7e2a3cb8ac92bb060239f0ff433079 --- /dev/null +++ b/samples/texts/6084721/page_9.md @@ -0,0 +1,27 @@
**Proof.** To prove our theorem we verify the conditions of theorem 2 according to the following plan. We suppose that $X^n$ and $X$ have paths in $D(\mathbb{R}^+, C_{\text{loc}})$.
Using the conditions of group 1, we establish a weak convergence of the processes $X^{n,\theta}$ and $X^\theta$ in the Skorohod space $D(\mathbb{R}^+, \mathbb{R})$. In turn, this convergence implies condition 1) of theorem 2.

Then, using the conditions of groups 1 and 2, we establish the convergence of the finite-dimensional distributions

$$ (X^{n,\theta_1}, X^{n,\theta_2}, \dots, X^{n,\theta_l}) \xrightarrow{w(P^n)} (X^{\theta_1}, X^{\theta_2}, \dots, X^{\theta_l}) $$

in the Skorohod space $D(\mathbb{R}^+, \mathbb{R}^l)$ for every $l \ge 2$. Finally, using the conditions of group 3 and the inequality (10) given below, we prove that for all $\theta, \theta' \in K_i$ and all bounded stopping times $\tau$:

$$ \sup_n E^n |X_\tau^{n,\theta} - X_\tau^{n,\theta'}|^p / |\theta - \theta'|^\alpha \le \bar{C}_i. $$

Using a specially chosen stopping time, this inequality and the lemma on the estimation of the modulus of continuity permit us to verify condition 2) of theorem 2.

To prove a weak convergence of $X^{n,\theta}$ to $X^\theta$ for each $\theta \in \mathbb{R}^m$ in the Skorohod space $D(\mathbb{R}^+, \mathbb{R})$, it is sufficient to verify the conditions of theorem 1 of Liptser, Shiryayev [21], p. 608-609. In fact, since the "limit" semimartingale is continuous, we have $\nu^\theta = 0$ and the conditions $U_1$ and $U_2$ of this theorem follow from condition 2) of group 1.

The process $B^{n,\theta,a} = (B_t^{n,\theta,a})_{t \ge 0}$ appearing in the canonical decomposition of the semimartingale $X^{n,\theta}$ is such that

$$ B_t^{n,\theta,a} = A_t^{n,\theta} + \int_0^t \int_{|x|>a} x \, d\nu^{n,\theta} $$

and condition 2) gives

$$ \sup_{t \le T} |B_t^{n,\theta,a} - A_t^{n,\theta}| \xrightarrow{P^n} 0. $$

Then we get condition $U_3$ after replacing $A_t^{n,\theta}$ by $B_t^{n,\theta,a}$.
The expression for the predictable variation of the square-integrable martingale $M^{n,\theta,a}$ appearing in the canonical decomposition of the semimartingale $X^{n,\theta}$ is:

$$ \langle M^{n,\theta,a} \rangle_t = \langle X^{n,\theta,c} \rangle_t + \int_0^t \int_{|x|\le a} x^2 \, d\nu^{n,\theta} - \sum_{0 < s \le t} \left( \int_{|x| \le a} x \, \nu^{n,\theta}(\{s\} \times dx) \right)^2 $$

$$ \langle \xi_1 \otimes_\varphi \eta_1 \mid \xi_2 \otimes_\varphi \eta_2 \rangle = \langle \pi_\ell^\varphi(\xi_2)^* \pi_\ell^\varphi(\xi_1) \eta_1 \mid \eta_2 \rangle . $$

The important point here is that $\pi_\ell^\varphi(\xi_2)^*\pi_\ell^\varphi(\xi_1) \in \mathcal{L}(L^2(\mathcal{M})_\mathcal{M}) = \mathcal{M}$. The closure of $\mathcal{D}(\mathfrak{H}, \varphi) \otimes K$ in this inner product, modulo the null space, is once again $\mathfrak{H} \otimes_\varphi K$.

If we do choose a row representation as in (5.2), we have

$$
\pi_{\ell}^{\varphi}\left(\left(\begin{array}{c}
x_{1} \varphi^{1 / 2} \\
x_{2} \varphi^{1 / 2} \\
\vdots
\end{array}\right)\right)=L\left(\left(\begin{array}{c}
x_{1} \\
x_{2} \\
\vdots
\end{array}\right)\right) .
$$

The paper [F] contains more exposition of this approach, including some alternate constructions.

**6. Realization of the relative tensor product as composition of unbounded operators**

In this section we briefly indicate how (5.1) can be rigorously justified and extended. Readers are referred to the sources for all details.

In his pioneering theory of noncommutative $L^p$ spaces, Haagerup [H] established a linear isomorphism between $\mathcal{M}_*^+$ and a class of positive unbounded operators affiliated with the core of $\mathcal{M}$. (The core, well-defined up to isomorphism, is the crossed product of $\mathcal{M}$ with one of its modular automorphism groups.) We will denote the operator corresponding to the positive functional $\varphi$ by $\varphi$ also.
These operators are $\tau$-measurable (see the next section), where $\tau$ is the canonical trace \ No newline at end of file diff --git a/samples/texts/732704/page_11.md b/samples/texts/732704/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..d3e23364dd74207356d2cc59f0c15685ea7420bb --- /dev/null +++ b/samples/texts/732704/page_11.md @@ -0,0 +1,23 @@
on the core, and so they generate a certain graded $*$-algebra: positive elements of $L^p(\mathcal{M})$ are defined to be operators of the form $\varphi^{1/p}$. The basic development of this theory can be found in [Te]; our choice of notation is influenced by [Y], where it is called a *modular algebra*.

The composition of two $L^2$ operators is an $L^1$ operator, and it turns out that (5.1) can be rigorously justified [S] as an operator equation. (This is not automatic, as operators like $\varphi^{-1/2}$ are not $\tau$-measurable and require more delicate arguments.) In fact, there is nothing sacred about half-densities. With the recent development of noncommutative $L^p$ modules [JS], one can allow relative tensor products to be bifunctors on Right $L^p(\mathcal{M})$ $\times$ Left $L^q(\mathcal{M})$, with range in a certain $L^r$ space. The mapping is densely-defined by

$$ \xi \otimes_{\varphi} \eta \triangleq (\xi \varphi^{\frac{1}{r}-\frac{1}{p}-\frac{1}{q}}) \eta. $$

In the case of an RTP of $L^\infty$ modules (or more generally, Hilbert C*-modules), the middle term is trivial and there is no change of density. This explains why there are no domain issues in defining a vector-valued RTP of Hilbert C*-modules [R].

Let us mention that the recent theory of operator bimodules, in which vectors can be realized as *bounded* operators, allows a variety of relative tensor products over C*-algebras [AP]. This can be naturally viewed as a generalization of the theory of Banach space tensor products, which corresponds to a C*-algebra of scalars.

**7.
Preclosedness of the map** $(\xi, \eta) \mapsto \xi \otimes_{\varphi} \eta$

Our purpose in this final section is to study when the relative tensor map is *preclosed*. This is a weaker condition than that of Falcone, who studied (effectively) when the map was *bounded*. We begin with a base case: a factor, two standard modules, and a simple product. With the usual notation $\mathfrak{B}_\varphi$ for $\mathcal{D}(L^2(\mathcal{M}), \varphi)$, the relevant map is

$$ \mathfrak{B}_{\varphi} \times L^2(\mathcal{M}) \ni (\xi, \eta) \mapsto \xi \otimes_{\varphi} \eta \in L^2(\mathcal{M}). $$

This is bilinear; we take “preclosed” to mean that if $\xi_n \to \xi \in \mathfrak{B}_\varphi$, $\eta_n \to \eta$, and $\xi_n \otimes_\varphi \eta_n \to \zeta$, then necessarily $\zeta = \xi \otimes_\varphi \eta$. We will also consider several variations: changing the domain to an algebraic tensor product, allowing non-factors, and allowing arbitrary modules.

Readers unfamiliar with von Neumann algebras will find this section more technical, and any background we can offer here is sure to be insufficient. Still, we introduce the necessary concepts in hopes that the non-expert will at least find the statements of the theorems accessible.

A weight is an “unbounded positive linear functional”: a linear map from $\mathcal{M}_+$ to $[0, +\infty]$. We will always assume that weights are normal, so $x_\alpha \nearrow x$ strongly $\Rightarrow \varphi(x_\alpha) \nearrow \varphi(x)$, and semifinite, so $\{x \in \mathcal{M}_+ \mid \varphi(x) < \infty\}$ is $\sigma$-weakly dense in $\mathcal{M}_+$. We can still define RTPs for faithful weights, but now $\mathfrak{B}_\varphi = \{x\varphi^{1/2} \mid \varphi(x^*x) < \infty\} \subset L^2(\mathcal{M})$. For details of the representations associated to weights, see [T2].

A weight $\tau$ which satisfies $\tau(xy) = \tau(yx)$ on its domain of definition will be called a *trace* (more properly, a “tracial weight”). An algebra which admits a faithful trace $\tau$ is semifinite; if in addition we can have $\tau(1) < \infty$, it is finite.
This facilitates the following classification of factors: a factor with $n$ orthogonal \ No newline at end of file diff --git a/samples/texts/732704/page_12.md b/samples/texts/732704/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..83df56c2c0924ca6bb3c32b97b655badb60e7e48 --- /dev/null +++ b/samples/texts/732704/page_12.md @@ -0,0 +1,68 @@
minimal projections is type $I_n$ (possibly $n = \infty$); a semifinite factor without minimal projections is type $II_1$ if finite and type $II_\infty$ if not; a factor which is not semifinite is type III. The reader will note that this refines our previous definitions of type, as a trace is exactly the object which orders the equivalence classes of projections. Obviously, there is much more to be said, and most of it can be found in [T1].

For a faithful trace $\tau$ on semifinite $\mathcal{M}$, it is useful to consider the $\tau$-measure topology [N]. This is a uniform topology with neighborhoods of 0 given by

$$
N(\delta, \epsilon) = \{x \in \mathcal{M} \mid \exists p \in \mathcal{P}(\mathcal{M}) \text{ with } \tau(p^\perp) < \delta, \|xp\| < \epsilon\}.
$$

The closure of $\mathcal{M}$ in this topology can be identified as a space of closed, densely-defined operators affiliated with $\mathcal{M}$. It is denoted $\mathfrak{M}(\mathcal{M})$ and actually forms a $*$-algebra to which $\tau$ extends naturally. (The $\tau$-measurability of an operator $T$ is equivalent to the assertion that $\tau(e(\lambda)^\perp) < \infty$ for some spectral projection $e(\lambda)$ of $|T|$, so we get that $\mathfrak{M}(\mathcal{M}) = \mathcal{M}$ if $\mathcal{M}$ is atomic.) It follows from modular theory that every weight on $(\mathcal{M}, \tau)$ is of the form $\tau_h = “\tau(h\cdot)”$ for some closed, densely-defined, and positive operator $h$.
In case $h$ is not $\tau$-measurable, this is to be interpreted as

$$
\lim_{\epsilon \to 0} \tau(h_{\epsilon}^{1/2} \cdot h_{\epsilon}^{1/2}), \text{ where } h_{\epsilon} = h(1 + \epsilon h)^{-1}.
$$

Finally, the presence of a faithful trace $\tau$ allows us to introduce the spaces

$$
L^p(\mathcal{M}, \tau) = \{ T \in \mathfrak{M}(\mathcal{M}) \mid \tau(|T|^p) = \|T\|_p^p < \infty \},
$$

which are antecedent to Haagerup's. Exposition can be found in [N]. Here we will only need $L^2(\mathcal{M}, \tau)$, which is a standard form and in particular isomorphic as a left module to any faithful GNS representation $\mathfrak{H}_\varphi$. It is easy to check that the norm topology in $L^2(\mathcal{M}, \tau)$ is stronger than the $\tau$-measure topology.

THEOREM 7.1. Let $\mathcal{M}$ be a factor. The map

$$
(7.1) \qquad \mathfrak{B}_{\varphi} \times L^2(\mathcal{M}) \to L^2(\mathcal{M}): \quad (\xi, \eta) \mapsto \xi \otimes_{\varphi} \eta
$$

is preclosed iff $\mathcal{M} = (\mathcal{M}, \tau)$ is semifinite and $h^{-1}$ is $\tau$-measurable, where $\varphi = \tau_h$.

PROOF. The proof is by consideration of cases.

$\mathcal{M}$ is type III: Choose a projection $e_0$ so $\varphi(e_0) = c < \infty$. Find orthogonal projections $f_1, g_1$ with $e_0 = f_1 + g_1$. Set $e_1 = f_1$ if $\varphi(f_1) \le \varphi(g_1)$ and $e_1 = g_1$ otherwise. Continuing in this fashion gives a sequence of projections, necessarily $\sigma$-finite, with $\varphi(e_n) \le c/2^n$. Thus all the $e_n$ are Murray-von Neumann equivalent, and there exist partial isometries $v_n$ with $v_n^*v_n = e_0$, $v_n v_n^* = e_n$. Implementing multiplication,

$$
(n v_n^* \varphi^{1/2}, (1/n) v_n \varphi^{1/2}) \mapsto v_n^* v_n \varphi^{1/2} = e_0 \varphi^{1/2}.
$$

But

$$
\|n v_n^* \varphi^{1/2}\|^2 = n^2 \varphi(e_n) \to 0; \qquad \|(1/n)v_n\varphi^{1/2}\|^2 = (1/n^2) \varphi(e_0) \to 0;
$$

thus the multiplication is not preclosed.
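The obstruction in the type III case is purely quantitative: with $\varphi(e_n) \le c/2^n$, both squared norms vanish while every product equals the fixed nonzero vector $e_0\varphi^{1/2}$. A quick numeric check of the two sequences (ours; $c = 1$ is an assumed normalization):

```python
# Numeric check (ours) of the norms in the type III case: with
# phi(e_n) <= c / 2^n, both squared norms tend to 0 while the product
# (n v_n^* phi^{1/2}) (x)_phi ((1/n) v_n phi^{1/2}) = e_0 phi^{1/2} is fixed.
c = 1.0
left_sq  = [n ** 2 * c / 2 ** n for n in range(1, 60)]   # ||n v_n^* phi^{1/2}||^2
right_sq = [c / n ** 2 for n in range(1, 60)]            # ||(1/n) v_n phi^{1/2}||^2
print(left_sq[-1] < 1e-12 and right_sq[-1] < 1e-3)       # prints True
```

The geometric decay of $\varphi(e_n)$ beats the polynomial factor $n^2$, so the pair of factors converges to $(0, 0)$ even though the products do not converge to $0$.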
\ No newline at end of file diff --git a/samples/texts/732704/page_13.md b/samples/texts/732704/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..6dfd117b483aeb3bcd65a07a693a0d9d3706f546 --- /dev/null +++ b/samples/texts/732704/page_13.md @@ -0,0 +1,48 @@
$\mathcal{M} = (\mathcal{M}, \tau)$ is *semifinite* and $h^{-1}$ is not $\tau$-measurable: First note that the measurability of $h^{-1}$ does not depend on the choice of $\tau$. Writing $h = \int \lambda \, de(\lambda)$, the hypothesis is that $\tau(e(\lambda)) = \infty$, $\forall \lambda$. Choose a projection $e_0$ with $\varphi(e_0) < \infty$ and $\tau(e_0) < \infty$. Then $e(1/n^3)$ has a subprojection $e_n$ which is equivalent to $e_0$. The above construction again shows that the map is not preclosed, except that

$$ \|nv_n^*\varphi^{1/2}\|^2 = n^2 \varphi(e_n) = n^2 \tau(he_n) \le (1/n)\tau(e_n) \to 0. $$

$\mathcal{M} = (\mathcal{M}, \tau)$ is semifinite and $h^{-1}$ is $\tau$-measurable: We assume

$$ (7.2) \qquad x_n\varphi^{1/2} \to x\varphi^{1/2}, \quad \eta_n \to \eta, \quad x_n\eta_n \to \zeta, $$

and want to show $\zeta = x\eta$. Set

$$ \mathfrak{n} = \{x \in \mathcal{M} \mid \overline{xh^{1/2}} \in L^2(\mathcal{M}, \tau)\}; \quad \mathfrak{n}_{\varphi} = \{x \in \mathcal{M} \mid \varphi(x^*x) < \infty\}, $$

both of which are strongly dense in $\mathcal{M}$. (The bar stands for graph closure.) Then

$$ \pi : \mathfrak{n}\varphi^{1/2} \to L^2(\mathcal{M}, \tau); \quad x\varphi^{1/2} \mapsto \overline{xh^{1/2}} $$

densely defines a left module Hilbert space isomorphism from $\mathfrak{H}_{\varphi}$ to $L^2(\mathcal{M}, \tau)$; denote its extension by $\pi$ as well. Recalling that $h^{-1/2}$ is $\tau$-measurable by assumption,

$$ \rho : \mathfrak{n}_{\varphi} \to \mathfrak{M}(\mathcal{M}); \quad x \mapsto \pi(x\varphi^{1/2})h^{-1/2} $$

is well-defined and the identity map on $\mathfrak{n}$.
It is also strong-measure continuous:

$$
\begin{align*}
& x_{\alpha} \xrightarrow{s} x \Rightarrow x_{\alpha}\varphi^{1/2} \xrightarrow{\mathfrak{H}_{\varphi}} x\varphi^{1/2} \Rightarrow \pi(x_{\alpha}\varphi^{1/2}) \xrightarrow{L^2} \pi(x\varphi^{1/2}) \\
& \Rightarrow \pi(x_{\alpha}\varphi^{1/2}) \xrightarrow{m} \pi(x\varphi^{1/2}) \Rightarrow \pi(x_{\alpha}\varphi^{1/2})h^{-1/2} \xrightarrow{m} \pi(x\varphi^{1/2})h^{-1/2},
\end{align*}
$$

where we used that multiplication is jointly continuous in the measure topology. We may conclude that $\rho$ is the identity on all of $\mathfrak{n}_\varphi$.

Implementing the isomorphism $\pi$, (7.2) becomes

$$ (7.3) \qquad \pi(x_n\varphi^{1/2}) \to \pi(x\varphi^{1/2}), \quad \pi(\eta_n) \to \pi(\eta), \quad x_n\pi(\eta_n) \to \pi(\zeta). $$

The convergences in (7.3) are also in measure; by the foregoing discussion we have

$$ x_n\pi(\eta_n) = \pi(x_n\varphi^{1/2})h^{-1/2}\pi(\eta_n) \to \pi(x\varphi^{1/2})h^{-1/2}\pi(\eta) = x\pi(\eta) $$

in measure as well.

The measure topology is also Hausdorff, so $\pi(\zeta) = x\pi(\eta)$ and therefore $\zeta = x\eta$. $\square$

The map $\rho$ suggests a schematic recovery of the “operators” in $\mathfrak{H}_{\varphi}$:

$$ (7.4) \qquad \{\pi_{\ell}^{\varphi}(\xi) \mid \xi \in \mathfrak{H}_{\varphi}\} = L^2(\mathcal{M}, \tau)h^{-1/2}. $$

Such operators are densely-defined but in general not closable (or may have multiple closed extensions [Sk]). Not surprisingly, then, the right-hand side of (7.4) may be only formal. The condition on $h$ in Theorem 7.1 makes the equality (7.4) rigorous, as the products on the right-hand side are well-defined $\tau$-measurable operators.
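In finite dimensions nothing in (7.4) is merely formal: for $\mathcal{M} = M_d(\mathbb{C})$ with trace $\mathrm{Tr}$ and $\varphi = \tau_h$ for an invertible positive $h$, the vector $\xi = x\varphi^{1/2}$ is the matrix $xh^{1/2}$, and the tensor map is everywhere defined. A small sanity check (ours; the dimension and eigenvalues are assumed toy values):

```python
import numpy as np

# Finite-dimensional sanity check (ours): in M_d(C) with trace Tr and
# phi = tau_h, the vector xi = x phi^{1/2} is the matrix x h^{1/2}, and
# xi (x)_phi eta = xi h^{-1/2} eta recovers the ordinary left action x.eta,
# mirroring the fact that rho is the identity map.
rng = np.random.default_rng(1)
d = 3
lam = np.array([0.5, 1.0, 3.0])                  # assumed eigenvalues of h
h_sqrt = np.diag(np.sqrt(lam))
h_inv_sqrt = np.diag(1.0 / np.sqrt(lam))

x = rng.standard_normal((d, d))
eta = rng.standard_normal((d, d))                # a vector of L^2(M, Tr)
xi = x @ h_sqrt                                  # xi = x phi^{1/2}
product = xi @ h_inv_sqrt @ eta                  # xi (x)_phi eta
print(bool(np.allclose(product, x @ eta)))       # prints True
```

Since $h^{-1/2}$ is bounded here, the map $(\xi, \eta) \mapsto \xi h^{-1/2}\eta$ is jointly continuous, which is the easy direction of Theorem 7.1 in the atomic finite-dimensional case.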
Note that $h$ and $h^{-1}$ are automatically $\tau$-measurable when $\mathcal{M}$ is finite, and in \ No newline at end of file diff --git a/samples/texts/732704/page_14.md b/samples/texts/732704/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..9779ff5c2b81b7e6ed6519149822c2f62a6de7ae --- /dev/null +++ b/samples/texts/732704/page_14.md @@ -0,0 +1,56 @@
this case all multiplications and isomorphisms between GNS representations stay within $\mathfrak{M}(\mathcal{M})$, and all operators are closed - a version, somewhat oblique, of the $T$-theorem of Murray and von Neumann.

**THEOREM 7.2.** Let $\mathcal{M}$ be a factor, and consider $\mathfrak{B}_{\varphi} \otimes_{alg} L^2(\mathcal{M})$ as a subspace of the Hilbert space tensor product $L^2(\mathcal{M}) \otimes L^2(\mathcal{M})$. The linear map

$$
(7.5) \quad \mathfrak{B}_{\varphi} \otimes_{alg} L^2(\mathcal{M}) \to L^2(\mathcal{M}): \quad \sum \xi_n \otimes \eta_n \mapsto \sum \xi_n \otimes_{\varphi} \eta_n
$$

is preclosed iff $\mathcal{M} = (\mathcal{M}, \tau)$ is atomic and $\tau(h^{-1}) < \infty$, where $\varphi = \tau_h$. In this case it is actually a bounded map, with norm $\tau(h^{-1})^{1/2}$.

PROOF. When $\mathcal{M}$ is type III, the map is not preclosed by the previous theorem. We will therefore fix a trace $\tau$, set $\varphi = \tau_h$, use the decomposition $h = \int \lambda \, de(\lambda)$, and view all vectors as elements of $L^2(\mathcal{M}, \tau)$. (When $\mathcal{M}$ is type I, we assume that $\tau$ is normalized so that $\tau(p) = 1$ for any minimal projection $p$.) The rest of the proof is again by cases.

$\mathcal{M}$ is type II: Choose $p < e(\lambda)$ for some $\lambda$ with $\tau(p) = c < \infty$. For each $k$, break up $p$ into equivalent orthogonal projections as $\sum_{n=1}^k p_n^k$. Consider the tensors

$$
T_k = \sum p_n^k h^{1/2} \otimes p_n^k \mapsto \sum p_n^k = p.
$$

Since the $p_n^k$ are orthogonal,

$$
\|T_k\|^2 = \sum \tau(p_n^k h) \tau(p_n^k) \leq \sum \left(\frac{\lambda c}{k}\right) \left(\frac{c}{k}\right) = \frac{\lambda c^2}{k} \to 0
$$

and the map is not preclosed.

$\mathcal{M}$ is type $I_\infty$ and $\tau(e(\lambda)) = \infty$ for some $\lambda$: Fix an orthogonal sequence of minimal projections $\{p_n\}$, $p_n < e(\lambda)$. The equivalence gives partial isometries with $v_n^* v_n = p_1$, $v_n v_n^* = p_n$. Then

$$
T_k = \sum_{n=1}^{k} \frac{1}{k} (v_n^* h^{1/2} \otimes v_n) \mapsto \sum_{n=1}^{k} \frac{1}{k} v_n^* v_n = p_1;
$$

$$
\|T_k\|^2 = \frac{1}{k^2} \sum \tau(p_n h) \tau(p_1) \leq \frac{1}{k^2} \sum \lambda = \frac{\lambda}{k} \to 0
$$

and the map is not preclosed.

In the only remaining situation, $\mathcal{M}$ is type I and $h$ is diagonalizable. Let $\{\lambda_n\}$ be the eigenvalues (with repetition), arranged in nondecreasing order. We will write all matrices with respect to the basis of eigenvectors.

If $s_k = \sum_{n=1}^{k} \frac{1}{\lambda_n} \nearrow \infty$: Consider

$$
(7.6) \quad T_k = \sum_{n=1}^{k} \frac{1}{s_k \lambda_n} e_{1n} h^{1/2} \otimes e_{n1} \mapsto \frac{1}{s_k} \sum_{n=1}^{k} \frac{1}{\lambda_n} e_{11} = e_{11};
$$

$$
\|T_k\|^2 = \sum_{n=1}^{k} \frac{1}{s_k^2 \lambda_n^2} \tau(e_{nn}h) \tau(e_{11}) = \frac{1}{s_k^2} \sum_{n=1}^{k} \frac{1}{\lambda_n} = \frac{1}{s_k} \to 0
$$ \ No newline at end of file diff --git a/samples/texts/732704/page_15.md b/samples/texts/732704/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..066aa05ea8d1950774036a3f33c458ceb252509d --- /dev/null +++ b/samples/texts/732704/page_15.md @@ -0,0 +1,39 @@
and the map is not preclosed.

If $s_k = \sum_{n=1}^k \frac{1}{\lambda_n} \nearrow C < \infty$, that is, $\tau(h^{-1}) < \infty$: We show that the map is bounded on finite tensors of the form $T = \sum_{ij} e_{ij} \otimes y^{ij}$.
We have + +$$T \mapsto S = \sum_{ij} e_{ij} h^{-1/2} y^{ij} = \sum_{ij} e_{ij} \lambda_j^{-1/2} y^{ij} = \sum_{ik} \left( \sum_j \lambda_j^{-1/2} y_{jk}^{ij} e_{ik} \right).$$ + +By Cauchy-Schwarz, + +$$ +\begin{align*} +\|S\|^2 &= \sum_{ik} \left| \sum_j \lambda_j^{-1/2} y_{jk}^{ij} \right|^2 \\ +&\le \sum_{ik} \left( \left[ \sum_j \lambda_j^{-1} \right] \left[ \sum_j |y_{jk}^{ij}|^2 \right] \right) \\ +&\le C \sum_{ijk} |y_{jk}^{ij}|^2 \le C \sum_{ijkl} |y_{lk}^{ij}|^2 = C \|T\|^2. +\end{align*} +$$ + +Since such tensors are dense in the Hilbert space tensor product, we may conclude that the norm of the map is $\le C^{1/2}$. But the tensors $T_k$ from (7.6) show that the norm is at least $C^{1/2}$. $\square$ + +We now extend Theorem 7.1 to the non-factor case. A general von Neumann algebra is a direct integral of factors (see [T1] for details), and weights on the algebra disintegrate as well. + +PROPOSITION 7.3. Let $\mathcal{M}$ be a von Neumann algebra with central decomposition $\int_{\Gamma}^{\oplus} \mathcal{M}(\omega)d\mu(\omega)$. The map + +$$ (7.7) \qquad \mathfrak{B}_{\varphi} \times L^2(\mathcal{M}) \to L^2(\mathcal{M}): \quad (\xi, \eta) \mapsto \xi \otimes_{\varphi} \eta $$ + +is preclosed iff $\mathcal{M} = (\mathcal{M}, \tau)$ is semifinite and + +$(\star) h(\omega)^{-1}$ is $\tau(\omega)$-measurable for $\mu$-a.e. $\omega$, where $\varphi = \tau_h$. + +PROOF. If $\mathcal{M}$ contains a summand of type III, the construction from Theorem 7.1, with the added restriction that $f_n$ and $g_n$ are chosen with equal central support, demonstrates that the map is not preclosed. + +If there is a trace $\tau$ for which $\varphi = \tau_h$ and $h^{-1}$ is $\tau$-measurable, then the argument in Theorem 7.1 still shows that the map is preclosed. We will see that this possibility is equivalent to $(\star)$. + +First note that $(\star)$ is independent of the trace chosen, as the choice of a different trace changes a.e. $h(\omega)$ by a constant factor. 
If $(\star)$ does not hold, fix any trace $\tau$, write $\varphi = \tau_h$, and let $\{e(\lambda)\}$ be the spectral projections of $h$. By hypothesis, we can find a nonzero central projection $z$ with $ze(\lambda)$ a properly infinite projection for all $\lambda$. The second construction of Theorem 7.1 shows that the map is not preclosed, where we choose all $e_n$, including $e_0$, with central support $z$.

Now suppose that $(\star)$ holds. We may choose a trace $\tau$ which factors as $\tilde{\tau} \circ \Phi$, where $\Phi$ is an extended center-valued trace and $\tilde{\tau}$ is a trace on the center with $\tilde{\tau}(1) < \infty$. Let $h$ and $\{e(\lambda)\}$ be as before. Now by assumption, the function

$$ z(\omega) = \max\{1/n \mid \tau(\omega)(e(1/n)(\omega)) < 1\} $$

is a.e. defined, non-zero, and finite. It is measurable by construction, so $z$ and $z^{-1}$ represent elements of the extended center. Now write $\varphi = (\tau_z)_{z^{-1}h}$. Let $f$ be the \ No newline at end of file diff --git a/samples/texts/732704/page_16.md b/samples/texts/732704/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..63c8014d2fb1662f663fcb1c723ada46ad7e2c5c --- /dev/null +++ b/samples/texts/732704/page_16.md @@ -0,0 +1,45 @@ +spectral projection of $z^{-1}h$ for $[0,1]$. We have $f(\omega) = e(z(\omega))(\omega)$, so $\tau(\omega)(f(\omega)) < 1$. Then $\Phi(f) < 1$, and $\tau_z(f) = \tilde{\tau}(z\Phi(f)) < \tilde{\tau}(1) < \infty$. Since $f$ was a spectral projection of $z^{-1}h$, we conclude that $(z^{-1}h)^{-1}$ is $\tau_z$-measurable. $\square$

PROPOSITION 7.4. Let $\mathcal{M}$ be a factor with left module $\mathcal{M}\mathfrak{R}$ and right module $\mathfrak{H}\mathcal{M}$. The map

$$ (7.8) \qquad \mathfrak{D}(\mathfrak{H}, \varphi) \times \mathfrak{R} \to \mathfrak{H} \otimes_{\varphi} \mathfrak{R}: \quad (\xi, \eta) \mapsto \xi \otimes_{\varphi} \eta $$

is preclosed only under the same conditions as in Theorem 7.1; i.e.
$\mathcal{M} = (\mathcal{M}, \tau)$ is semifinite and $h^{-1}$ is $\tau$-measurable, where $\varphi = \tau_h$.

PROOF. If $\mathcal{M}$ is type III, any separable $L^2$ left or right module is isomorphic to $L^2(\mathcal{M})$, so multiplication is not preclosed.

Now assume $h^{-1}$ is $\tau$-measurable. When

$$ \begin{pmatrix} x_1^k \varphi^{1/2} \\ x_2^k \varphi^{1/2} \\ \vdots \end{pmatrix} \xrightarrow{k \to \infty} \begin{pmatrix} x_1 \varphi^{1/2} \\ x_2 \varphi^{1/2} \\ \vdots \end{pmatrix}, \quad (\eta_1^k, \eta_2^k, \dots) \xrightarrow{L^2} (\eta_1, \eta_2, \dots), \quad (x_i^k \eta_j^k) \xrightarrow{k \to \infty} (\zeta_{ij}), $$

we also have $L^2$ convergence in each coordinate. By Theorem 7.1, $\zeta_{ij} = x_i \eta_j$.

When $h^{-1}$ is not $\tau$-measurable, $\mathcal{M}$ must be $I_\infty$ or $II_\infty$. In this case $\mathcal{M} \simeq M_\infty(\mathcal{M})$, and we do not need row and column matrices: $\mathfrak{H} \simeq q_1 L^2(\mathcal{M})$ and $\mathfrak{R} \simeq L^2(\mathcal{M})q_2$ for appropriate projections $q_1, q_2$. Fix equivalent finite projections $f_1 \le q_1$, $f_2 \le q_2$ with $v^*v = f_1$, $vv^* = f_2$. By assumption $e(1/n^3)$ is infinite for all $n$; let $g_n \le e(1/n^3)$ be a subprojection equivalent to the $f_i$, with $v_{in}^*v_{in} = f_i$, $v_{in} v_{in}^* = g_n$. Then

$$ (nv_{1n}^*\varphi^{1/2}, (1/n)v_{2n}) \mapsto v^*, $$

$$ ||nv_{1n}^*\varphi^{1/2}||^2 = n^2\tau(g_n h) \le (1/n)\tau(f_1) \to 0, $$

$$ ||(1/n)v_{2n}||^2 = (1/n^2)\tau(f_2) \to 0, $$

and the map is not preclosed. $\square$

References

[A1] C. Anantharaman-Delaroche, *Atomic correspondences*, Indiana Univ. Math. J. **42** (1993), no. 2, 505-531.

[A2] C. Anantharaman-Delaroche, *Amenability of bimodules and operator algebras*, in *Operator algebras and quantum field theory*, Internat. Press, Cambridge, MA, 1997, 225-235.

[AP] C. Anantharaman-Delaroche and C. Pop, *Relative tensor products and infinite $C^*$-algebras*, J.
Operator Theory **47** (2002), 389-412.

[C1] A. Connes, *On the spatial theory of von Neumann algebras*, J. Funct. Anal. **35** (1980), 153-164.

[C2] A. Connes, *Noncommutative geometry*, Harcourt Brace & Co., San Diego, 1994.

[F] T. Falcone, *$L^2$-von Neumann modules, their relative tensor products and the spatial derivative*, Illinois J. Math. **44** (2000), no. 2, 407-437.

[H] U. Haagerup, *$L^p$-spaces associated with an arbitrary von Neumann algebra*, Algèbres d'opérateurs et leurs applications en physique mathématique, CNRS **15** (1979), 175-184.

[Jol] P. Jolissaint, *Index for pairs of finite von Neumann algebras*, Pac. J. Math. **146** (1990), 43-70. \ No newline at end of file diff --git a/samples/texts/732704/page_17.md b/samples/texts/732704/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..16a7408996875cf67abfdd51e44a629ce6ecea58 --- /dev/null +++ b/samples/texts/732704/page_17.md @@ -0,0 +1,37 @@ +[J] V. Jones, *Index for subfactors*, Invent. Math. **72** (1983), 1-25.

[JoS] V. Jones and V. Sunder, *Introduction to subfactors*, London Mathematical Society Lecture Note Series **234**, Cambridge University Press, Cambridge, 1997.

[JS] M. Junge and D. Sherman, *Noncommutative $L^p$ modules*, J. Operator Theory, to appear.

[KR] R. Kadison and J. Ringrose, *Fundamentals of the theory of operator algebras I, II*, Graduate Studies in Mathematics **15**, **16**, AMS, Providence, 1997.

[K] H. Kosaki, *Index theory for operator algebras*, Sugaku Expositions **4** (1991), no. 2, 177-197.

[N] E. Nelson, *Notes on non-commutative integration*, J. Funct. Anal. **15** (1974), 103-116.

[Pa] W. Paschke, *Inner product modules over $B^*$-algebras*, Trans. AMS **182** (1973), 443-468.

[P] S. Popa, *Correspondences*, notes, 1986.

[R] M. Rieffel, *Morita equivalence for $C^*$-algebras and $W^*$-algebras*, J. Pure and Appl. Algebra **5** (1974), 51-96.

[Sa] J.-L.
Sauvageot, *Sur le produit tensoriel relatif d'espaces de Hilbert*, J. Operator Theory **9** (1983), 237-252.

[S] D. Sherman, *Applications of modular algebras*, in preparation.

[Sk] C. Skau, *Positive self-adjoint extensions of operators affiliated with a von Neumann algebra*, Math. Scand. **44** (1979), 171-195.

[T1] M. Takesaki, *Theory of Operator Algebras I*, Springer-Verlag, New York, 1979.

[T2] M. Takesaki, *Theory of Operator Algebras II*, Springer-Verlag, to appear.

[Te] M. Terp, *$L^p$-spaces associated with von Neumann algebras*, notes, Copenhagen University, 1981.

[W] N. E. Wegge-Olsen, *K-theory and $C^*$-algebras*, Oxford University Press, Oxford, 1993.

[Y] S. Yamagami, *Algebraic aspects in modular theory*, Publ. RIMS **28** (1992), 1075-1106.

DEPARTMENT OF MATHEMATICS, UNIVERSITY OF ILLINOIS, URBANA, IL 61801-2975

*E-mail address: dasherma@math.uiuc.edu* \ No newline at end of file diff --git a/samples/texts/732704/page_2.md b/samples/texts/732704/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..9c0746dd213d447ea9d49dd1a7a5ed4cf1696220 --- /dev/null +++ b/samples/texts/732704/page_2.md @@ -0,0 +1,31 @@ +come from a “change of density”. (We say that the *density* of an $L^p$-type space is $1/p$.) Once this is understood, it is easy to handle $L^p$ modules [JS] as well. Modular algebras ([Y], [S]) provide an elegant framework, so we briefly explain their meaning.

The final section of the paper investigates the question, “When is the map $(\xi, \eta) \mapsto \xi \otimes_\varphi \eta$ preclosed?” This may be considered as an extension of Falcone’s theorem [F], in which he found conditions for the map to be everywhere-defined. We consider a variety of formulations.

We have tried to make the paper as accessible as possible to non-operator algebraists, especially in the first half.
Of course, even at this level many results rely on familiarity with the projection theory of von Neumann algebras; basic sources are [T1], [T2], [KR]. Primary references for RTPs are [Sa], [P], [F], [C2].

## 2. Notations and background

The basic objects of this paper are von Neumann algebras, always denoted here by $\mathcal{M}$, $\mathcal{N}$, or $\mathcal{P}$. These can be defined in many equivalent ways:

* $C^*$-algebras which are dual spaces.

* strongly-closed unital $^*$-subalgebras of $B(\mathfrak{H})$. $B(\mathfrak{H})$ is the set of bounded linear operators on a Hilbert space $\mathfrak{H}$; the strong topology is generated by the seminorms $x \mapsto \|x\xi\|$, $\xi \in \mathfrak{H}$; the $^*$ operation is given by the operator adjoint.

* $^*$-closed subsets of $B(\mathfrak{H})$ which equal their double (iterated) commutant. The commutant of a set $S \subset B(\mathfrak{H})$ is $\{x \in B(\mathfrak{H}) \mid xy = yx, \; \forall y \in S\}$.

As one might guess from the definitions, the study of von Neumann algebras turns on the interplay between algebraic and analytic techniques.

Finite-dimensional von Neumann algebras are direct sums of full matrix algebras. At the other extreme, commutative von Neumann algebras are all of the form $L^\infty(X, \mu)$ for some measure space $(X, \mu)$, so the study of general von Neumann algebras is considered “noncommutative measure theory.” Based on this analogy, the (unique) predual $\mathcal{M}_*$ of $\mathcal{M}$ is called $L^1(\mathcal{M})$; it is the set of normal (= continuous in yet another topology, the $\sigma$-weak) linear functionals on $\mathcal{M} \subset B(\mathfrak{H})$, and can be thought of as “noncommutative countably additive measures”. A functional $\varphi$ is positive when $x > 0 \Rightarrow \varphi(x) \ge 0$; the set of positive normal functionals is denoted $\mathcal{M}_*^+$.
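A concrete finite-dimensional illustration (the $3 \times 3$ matrices below are arbitrary illustrative choices): on $M_3$, the functional $\varphi = \operatorname{Tr}(\rho\,\cdot)$ with $\rho \ge 0$ is positive, and the smallest projection $q$ with $\varphi(1-q) = 0$ (the support $s(\varphi)$ discussed next) is the projection onto the range of $\rho$.

```python
import numpy as np

rho = np.diag([0.5, 0.5, 0.0])              # rho >= 0, so phi = Tr(rho .) is positive
phi = lambda x: np.trace(rho @ x).real

# positivity: phi(x) >= 0 for any x of the form a* a >= 0
a = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]])
assert phi(a.conj().T @ a) >= 0

# support projection: q projects onto the range of rho
q = np.diag([1.0, 1.0, 0.0])
assert np.isclose(phi(np.eye(3) - q), 0.0)  # phi(1 - q) = 0
# a strictly smaller projection fails: dropping a range vector loses mass
q_small = np.diag([1.0, 0.0, 0.0])
assert phi(np.eye(3) - q_small) > 0
```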
The support $\text{s}(\varphi)$ of a positive normal linear functional $\varphi$ is the smallest projection $q \in \mathcal{M}$ with $\varphi(1-q) = 0$. So if $\mathcal{M}$ is abelian, $\varphi$ corresponds to a measure and $q$ is the (indicator function of the) usual support.

For simplicity, all modules in this paper are separable Hilbert spaces (except in Section 6), all algebras have separable predual, all linear functionals are normal, and all representations are normal and nondegenerate ($\mathcal{M}\mathfrak{H}$ or $\mathfrak{H}\mathcal{M}$ is all of $\mathfrak{H}$). Two projections $p, q$ in a von Neumann algebra are said to be (Murray-von Neumann) equivalent if there exists $v \in \mathcal{M}$ with $v^*v = p$, $vv^* = q$. Such an element $v$ is called a partial isometry, and we think of $p$ and $q$ as being “the same size”. Subscripts are used to represent actions, so $\mathcal{X}_{\mathcal{M}}$ indicates that $\mathcal{X}$ is a right $\mathcal{M}$-module, i.e. a representation of the opposite algebra $\mathcal{M}^{\text{op}}$. It is implicit in the term “bimodule”, or in the notation $\mathcal{M}\mathfrak{H}_{\mathcal{N}}$, that the two actions commute. The phrase “left (resp. right) action of” is frequently abbreviated to $L$ (resp. $R$) for operators or entire algebras, so that we speak of $L(x)$ or $R(\mathcal{M})$. Finally, we often write $M_\infty$ for the von Neumann algebra of all bounded operators on a separable infinite-dimensional \ No newline at end of file diff --git a/samples/texts/732704/page_3.md b/samples/texts/732704/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..0fe0578f71fb68485b9e340a96e63f538519492a --- /dev/null +++ b/samples/texts/732704/page_3.md @@ -0,0 +1,25 @@ +Hilbert space, and $M_{\infty}(\mathcal{M})$ for the von Neumann tensor product $M_{\infty}\bar{\otimes}\mathcal{M}$.
One can think of this as the set of infinite matrices with entries in $\mathcal{M}$; we will denote by $e_{ij}$ the matrix unit with 1 in the $ij$ position and 0 elsewhere.

The (left) representation theory of von Neumann algebras on Hilbert spaces is simple, so we recall it briefly. (Most of this development can be found in Chapters 1 and 2 of [JoS].) First, there is a standard construction, due to Gelfand-Neumark and Segal (abbreviated GNS), for building a representation from $\varphi \in \mathcal{M}_{*}^{+}$. To each $x \in \mathcal{M}$ we formally associate the vector $x\varphi^{1/2}$ (various notations are in use, e.g. $\eta_{\varphi}(x)$ or $\Lambda_{\varphi}(x)$, but this one is especially appropriate ([C2] V.App.B, [S])). We endow this set with the inner product

$$ \langle x\varphi^{1/2}, y\varphi^{1/2} \rangle = \varphi(y^*x), $$

and set $\mathfrak{h}_{\varphi}$ to be the closure in the inherited topology, modulo the null space. The left action of $\mathcal{M}$ on $\mathfrak{h}_{\varphi} = \overline{\mathcal{M}\varphi^{1/2}}$ is bounded and densely defined by left composition.

When $\varphi$ is faithful (meaning $x > 0 \Rightarrow \varphi(x) > 0$), the vector $\varphi^{1/2} = 1\varphi^{1/2}$ is cyclic ($\overline{\mathcal{M}\varphi^{1/2}} = \mathfrak{h}_{\varphi}$) and separating ($x \neq 0 \Rightarrow x\varphi^{1/2} \neq 0$). Now all representations with a cyclic and separating vector are isomorphic - a sort of "left regular representation"; we will denote this by $\mathcal{M}L^2(\mathcal{M})$. It is a fundamental fact that the commutant of this action is antiisomorphic to $\mathcal{M}$, and when we make this identification we call $\mathcal{M}L^2(\mathcal{M})_{\mathcal{M}}$ the standard form of $\mathcal{M}$. If $\varphi$ is not faithful, the GNS construction produces a vector $\varphi^{1/2}$ which is cyclic but not separating, and a representation which is isomorphic to $\mathcal{M}L^2(\mathcal{M})s(\varphi)$ ([T2], Ch. VIII, IX).
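The GNS construction is concrete in the simplest case. As an illustrative sketch (the choice $\mathcal{M} = M_n$ with the tracial state is an assumption made only for this example), the inner product $\varphi(y^*x)$ is the normalized Hilbert-Schmidt inner product on matrices, and the left action is bounded by the operator norm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
phi = lambda x: np.trace(x) / n                 # the tracial state on M_n
inner = lambda x, y: phi(y.conj().T @ x)        # <x phi^1/2, y phi^1/2> = phi(y* x)

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# the GNS inner product is the (normalized) Hilbert-Schmidt inner product Tr(y* x)/n
assert np.isclose(inner(x, y), np.vdot(y, x) / n)

# left composition x phi^1/2 -> a x phi^1/2 is bounded by the operator norm of a
lhs = np.sqrt(inner(a @ x, a @ x).real)
assert lhs <= np.linalg.norm(a, 2) * np.sqrt(inner(x, x).real) + 1e-9
```

Since the tracial state is faithful, no null space appears; for a non-faithful $\varphi$ the quotient step would be visible as matrices $x$ with $\varphi(x^*x) = 0$.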
Now let us examine an arbitrary (separable, so countably generated) module $\mathcal{M}\mathfrak{h}$. Following standard arguments (e.g. [T1] I.9), $\mathfrak{h}$ decomposes into a direct sum of cyclic representations $\mathcal{M}(\overline{\mathcal{M}\xi_n})$, each of which is isomorphic to the GNS representation for the associated vector state $\omega_{\xi_n}$ $(= \langle \cdot\, \xi_n, \xi_n \rangle)$. With $q_n = s(\omega_{\xi_n})$, we have

$$ \mathcal{M}\mathfrak{h} \simeq \bigoplus \overline{\mathcal{M}\xi_n} \simeq \bigoplus \mathcal{M}\mathfrak{h}_{\omega_{\xi_n}} \simeq \bigoplus \mathcal{M}L^2(\mathcal{M})q_n. $$

(Here and elsewhere, "$\simeq$" means a unitary equivalence of (bi)modules.) Since this is a left module, it is natural to write vectors as rows with the $n$th entry in $L^2(\mathcal{M})q_n$:

$$ (2.1) \quad \mathfrak{h} \simeq (L^2(\mathcal{M})q_1 \;\; L^2(\mathcal{M})q_2 \;\; \cdots) \simeq (L^2(\mathcal{M}) \;\; L^2(\mathcal{M}) \;\; \cdots)(\sum q_n \otimes e_{nn}). $$

We will call such a decomposition a *row representation* of $\mathcal{M}\mathfrak{h}$. Here $e_{nn}$ are diagonal matrix units in $M_{\infty}$, so $(\sum q_n \otimes e_{nn})$ is a diagonal projection in $M_{\infty}(\mathcal{M})$. The left action of $\mathcal{M}$ is, of course, matrix multiplication (by $1 \times 1$ matrices) on the left.

The module $(L^2(\mathcal{M}) \;\; L^2(\mathcal{M}) \;\; \cdots)$ will be denoted $R^2(\mathcal{M})$ (for "row"). Since the standard form behaves naturally with respect to restriction - $L^2(q\mathcal{N}q) \simeq qL^2(\mathcal{N})q$ as bimodules - it follows that $L^2(M_{\infty}(\mathcal{M}))$ is built as infinite matrices over $L^2(\mathcal{M})$ (see (3.3)). Thus $R^2(\mathcal{M})$ can be realized as $e_{11}L^2(M_{\infty}(\mathcal{M}))$.

**PROPOSITION 2.1.** Any countably generated left representation of $\mathcal{M}$ on a Hilbert space is isomorphic to $R^2(\mathcal{M})q$ for some diagonal projection $q \in M_{\infty}(\mathcal{M})$.
Any projection in $M_{\infty}(\mathcal{M})$, diagonal or not, defines a module in this way, and two such modules are isomorphic exactly when the projections are equivalent. In fact

$$ (2.2) \quad \operatorname{Hom}(\mathcal{M}R^2(\mathcal{M})q_1, \mathcal{M}R^2(\mathcal{M})q_2) = R(q_1 M_{\infty}(\mathcal{M})q_2). $$ \ No newline at end of file diff --git a/samples/texts/732704/page_4.md b/samples/texts/732704/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..d5340c2437c7d91212f5e59fd18a88a784453d9a --- /dev/null +++ b/samples/texts/732704/page_4.md @@ -0,0 +1,19 @@ +So isomorphism classes correspond to equivalence classes of projections in $M_\infty(\mathcal{M})$, which is the monoid $V(M_\infty(\mathcal{M}))$ in K-theoretic language [W]. The direct sum of isomorphism classes of modules corresponds to the sum of orthogonal representatives in $V(M_\infty(\mathcal{M}))$, giving a monoidal equivalence.

We denote the category of separable left $\mathcal{M}$-modules by $\text{Left } L^2\text{Mod}(\mathcal{M})$. For us, the most important consequence of (2.2) is that

$$ (2.3) \qquad \mathcal{L}(\mathcal{M}R^2(\mathcal{M})q) = R(qM_{\infty}(\mathcal{M})q), $$

where “$\mathcal{L}$” stands for the commutant of the $\mathcal{M}$-action. (In particular, the case $q = e_{11}$ is just the standard form.) The algebra $qM_\infty(\mathcal{M})q$ is called an *amplification* of $\mathcal{M}$, being a generalization of a matrix algebra with entries in $\mathcal{M}$. Of course everything above can be done for right modules - the relevant abbreviations are $C^2(\mathcal{M})$, for “column,” and $\text{Right } L^2\text{Mod}(\mathcal{M})$.

**Example.** Suppose $\mathcal{M} = M_3(\mathbb{C})$. In this case the standard form may be taken as

$$ M_3 L^2(M_3)_{M_3}; \quad L^2(M_3) \simeq (M_3, \langle \cdot, \cdot \rangle), \text{ where } \langle x, y \rangle = \operatorname{Tr}(y^*x).
$$

Note that this norm, called the Hilbert-Schmidt norm, is just the $\ell^2$ norm of the matrix entries, and that the left and right multiplicative actions are commutants. (If we had chosen a nontracial positive linear functional, we would have obtained an isomorphic bimodule with a “twisted” right action... this is inchoate Tomita-Takesaki theory.) The module $R^2(M_3)$ is $M_{3\times\infty}$, again with the Hilbert-Schmidt norm, and the commutant is $M_\infty(M_3) \simeq M_\infty$. According to Proposition 2.1, isomorphism classes of left $M_3$-modules should be parameterized by equivalence classes of projections in $M_\infty$. These are indexed by their rank $n \in (\mathbb{Z}_+ \cup \{\infty\})$; the corresponding isomorphism class of modules has representative $M_{3\times n}$. In summary, we have learned that any left representation of $M_3$ on a Hilbert space is isomorphic to some number of copies of $\mathbb{C}^3$. The same argument shows that $V(M_\infty(M_k)) \simeq (\mathbb{Z}_+ \cup \{\infty\})$ for any $k$.

Properties of the monoid $V(M_\infty(\mathcal{M}))$ determine the so-called *type* of the algebra. For a factor (a von Neumann algebra whose center is just the scalars), there are only three possibilities: $(\mathbb{Z}_+ \cup \{\infty\})$, $(\mathbb{R}_+ \cup \{\infty\})$, and $\{0, \infty\}$. These are called types I, II, III, respectively; a fuller discussion is given in Section 7.

### 3. Algebraic approaches to RTPs

When $R$ is a ring, the algebraic $R$-relative tensor product is the functor, covariant in both variables, which maps a right $R$-module $A$ and left $R$-module $B$ to the vector space $(A \otimes_{alg} B)/N$, where $N$ is the subspace generated algebraically by tensors of the form $ar \otimes b - a \otimes rb$. In functional analysis, where spaces are usually normed and infinite-dimensional, one obvious amendment is to replace vector spaces with their closures.
But in the context of Hilbert modules over a von Neumann algebra $\mathcal{M}$, this is still not enough. Surprisingly, a result of Falcone ([F], Theorem 3.8) shows that if the RTP $L^2(\mathcal{M}) \otimes_\mathcal{M} L^2(\mathcal{M})$ is the closure of a continuous (meaning $\|I(\xi \otimes \eta)\| \le C\|\xi\|\,\|\eta\|$) nondegenerate image of the algebraic $\mathcal{M}$-relative tensor product, $\mathcal{M}$ must be *atomic*, i.e. $\mathcal{M} \simeq \bigoplus_n B(\mathfrak{H}_n)$. We will discuss the analytic obstruction further in Section 5. For now, we take Falcone's theorem as a directive: do not look for a map which is defined for every pair of vectors. If we give up completely on a vector-level construction, we can at least make the functorial \ No newline at end of file diff --git a/samples/texts/732704/page_5.md b/samples/texts/732704/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..d36029f42825abb84c6c857ff361fe45c198ed90 --- /dev/null +++ b/samples/texts/732704/page_5.md @@ -0,0 +1,35 @@ +**DEFINITION 3.1 ([Sa]).** Given a von Neumann algebra $\mathcal{M}$, a relative tensor product is a functor, covariant in both variables,

$$ (3.1) \qquad \text{Right } L^2\text{Mod}(\mathcal{M}) \times \text{Left } L^2\text{Mod}(\mathcal{M}) \to \text{Hilbert} : (\mathfrak{H}, \mathfrak{K}) \mapsto \mathfrak{H} \otimes_\mathcal{M} \mathfrak{K}, $$

which satisfies

$$ (3.2) \qquad L^2(\mathcal{M}) \otimes_{\mathcal{M}} L^2(\mathcal{M}) \simeq L^2(\mathcal{M}) $$

as bimodules.

Although at first glance this definition seems broad, in fact we see in the next proposition that there is exactly one RTP functor (up to equivalence) for each algebra. The reader is reminded that functoriality implies a mapping of intertwiner spaces as well, so it is enough to specify the map on representatives of each isomorphism class.
In particular we have the bimodule structure

$$ \mathcal{L}(\mathfrak{H}_\mathcal{M})(\mathfrak{H} \otimes_\mathcal{M} \mathfrak{K})\mathcal{L}(\mathcal{M}\mathfrak{K}). $$

**PROPOSITION 3.2.** Let $\mathfrak{H} \simeq pC^2(\mathcal{M}) \in \text{Right } L^2\text{Mod}(\mathcal{M})$ and $\mathfrak{K} \simeq R^2(\mathcal{M})q \in \text{Left } L^2\text{Mod}(\mathcal{M})$ for some projections $p, q \in M_\infty(\mathcal{M})$. Then

$$ \mathfrak{H} \otimes_{\mathcal{M}} \mathfrak{K} \simeq p L^2(M_{\infty}(\mathcal{M}))q $$

with natural action of the commutants.

**PROOF.** By implementing an isomorphism, we may assume that the projections are diagonal: $p = \sum p_i \otimes e_{ii}$, $q = \sum q_j \otimes e_{jj}$. Using (3.2) and functoriality, we have the bimodule isomorphisms

$$ \begin{align*} \mathfrak{H} \otimes_{\mathcal{M}} \mathfrak{K} &\simeq (\oplus p_i L^2(\mathcal{M})) \otimes_{\mathcal{M}} (\oplus L^2(\mathcal{M})q_j) \\ &\simeq \bigoplus_{i,j} p_i L^2(\mathcal{M}) \otimes_{\mathcal{M}} L^2(\mathcal{M})q_j \simeq \bigoplus_{i,j} p_i L^2(\mathcal{M})q_j \simeq p L^2(M_{\infty}(\mathcal{M}))q. \end{align*} $$

$\square$

Visually,

$$ (3.3) \qquad \left( (p) \begin{pmatrix} L^2(\mathcal{M}) \\ L^2(\mathcal{M}) \\ \vdots \end{pmatrix}, (L^2(\mathcal{M}) L^2(\mathcal{M}) \cdots) (q) \right) \mapsto (p) \begin{pmatrix} L^2(\mathcal{M}) & L^2(\mathcal{M}) & \cdots \\ L^2(\mathcal{M}) & L^2(\mathcal{M}) & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} (q), $$

where of course the $\ell^2$ sums of the norms of the entries in these matrices are finite.

After making the categorical definition above, Sauvageot immediately noted that it gives us no way to perform computations. We will turn to his analytic construction in Section 5; here we discuss an approach to bimodules and RTPs due to Connes. In his terminology a bimodule is called a correspondence.
(The best references known to the author are [C2] and [P], but there was an earlier unpublished manuscript which is truly the source of Connes fusion.) + +Consider a correspondence $\mathcal{M}\mathfrak{H}\mathcal{N}$. Choosing a row representation $R^2(\mathcal{M})q$ for $\mathfrak{H}$, we know that the full commutant of $L(\mathcal{M})$ is isomorphic to $R(qM_\infty(\mathcal{M})q)$. This gives us a unital injective *-homomorphism $\rho: \mathcal{N} \hookrightarrow qM_\infty(\mathcal{M})q$, and from the map $\rho$ one can reconstruct the original bimodule (up to isomorphism) as $\mathcal{M}(R^2(\mathcal{M})q)_{\rho(\mathcal{N})}$. \ No newline at end of file diff --git a/samples/texts/732704/page_6.md b/samples/texts/732704/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..874321edae273ec54f88bf48e21230b4e4d88280 --- /dev/null +++ b/samples/texts/732704/page_6.md @@ -0,0 +1,45 @@ +What if we had chosen a different row representation $R^2(\mathcal{M})q'$ and obtained +$\rho' : \mathcal{N} \to q'M_\infty(\mathcal{M})q'$? By Proposition 2.1, the module isomorphism + +$$ +(3.4) \qquad \mathcal{M}R^2(\mathcal{M})q \simeq \mathcal{M}R^2(\mathcal{M})q' +$$ + +is necessarily given by the right action of a partial isometry $v$ between $q$ and $q'$ in $M_\infty(\mathcal{M})$. Then $\rho$ and $\rho'$ differ by an inner perturbation: $\rho(x) = v^*\rho'(x)v$. We conclude that the class of $\mathcal{M}-\mathcal{N}$ correspondences, modulo isomorphism, is equivalent to the class of unital injective *-homomorphisms from $\mathcal{N}$ into an amplification of $\mathcal{M}$, modulo inner perturbation. (These last are called sectors in subfactor theory.) The distinctions between bimodules, morphisms, and their appropriate equivalence classes are frequently blurred in the literature; our convention here is to use the term “correspondence” to mean a representative *-homomorphism for a bimodule. 
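A toy finite-dimensional version of a correspondence (illustrative assumptions: $\mathcal{N} = M_2$, $\mathcal{M} = \mathbb{C}$, amplification $M_4$, so $\rho$ is just $x \mapsto x \otimes 1_2$) shows concretely what "modulo inner perturbation" means: conjugating by any unitary $v$ in the amplification yields another unital injective $*$-homomorphism representing the same bimodule.

```python
import numpy as np

rng = np.random.default_rng(2)

rho = lambda x: np.kron(x, np.eye(2))        # rho: M_2 -> M_4, x -> x (x) 1_2

# a unitary v in the amplification, via QR of a random complex matrix
v, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
rho_p = lambda x: v.conj().T @ rho(x) @ v    # inner perturbation rho'(x) = v* rho(x) v

x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
y = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

assert np.allclose(rho_p(x @ y), rho_p(x) @ rho_p(y))          # multiplicative
assert np.allclose(rho_p(x.conj().T), rho_p(x).conj().T)       # *-preserving
assert np.allclose(rho_p(np.eye(2)), np.eye(4))                # unital
```

Here $v$ is a full unitary only because the toy example is finite-dimensional; in the text the perturbation is by a partial isometry between the projections $q$ and $q'$.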
Notice that a unital inclusion $\mathcal{N} \subset \mathcal{M}$ gives the bimodule $\mathcal{M}L^2(\mathcal{M})_\mathcal{N}$.

The RTP of correspondences is extremely simple.

PROPOSITION 3.3. Consider bimodules $\mathcal{M}\mathfrak{H}_{\mathcal{N}}$ and $\mathcal{N}\mathfrak{K}_{\mathcal{P}}$ coming from correspondences $\rho_1 : \mathcal{N} \hookrightarrow qM_\infty(\mathcal{M})q$ and $\rho_2 : \mathcal{P} \hookrightarrow q'M_\infty(\mathcal{N})q'$. The bimodule $\mathcal{M}(\mathfrak{H} \otimes_{\mathcal{N}} \mathfrak{K})_{\mathcal{P}}$ is the correspondence $\rho_1 \circ \rho_2$, where we amplify $\rho_1$ appropriately.

We pause to mention that it is also fruitful to realize correspondences in terms of completely positive maps. We shall have nothing to say about this approach; the reader is referred to [P] for basics or [A2] for a recent investigation.

**4. Applications to Morita equivalence and index**

An invertible $*$-functor from $\text{Left } L^2\text{Mod}(\mathcal{N})$ to $\text{Left } L^2\text{Mod}(\mathcal{M})$ is called a *Morita equivalence* [R]. Here a $*$-functor is a functor which commutes with the adjoint operation at the level of morphisms. One way to create $*$-functors is the following: to the bimodule $\mathcal{M}\mathfrak{H}_{\mathcal{N}}$, we associate

$$
(4.1) \quad F_{\mathfrak{H}} : \text{Left } L^2\text{Mod}(\mathcal{N}) \to \text{Left } L^2\text{Mod}(\mathcal{M}); \quad \mathcal{N}\mathfrak{K} \mapsto (\mathcal{M}\mathfrak{H}_{\mathcal{N}}) \otimes_{\mathcal{N}} (\mathcal{N}\mathfrak{K}).
$$

The next theorem is fundamental.

THEOREM 4.1. When $L(\mathcal{M})$ and $R(\mathcal{N})$ are commutants on $\mathfrak{H}$, the RTP functor $F_{\mathfrak{H}}$ is a Morita equivalence. Moreover, every Morita equivalence is equivalent to an RTP functor.

This type of result - the second statement is an operator algebraic analogue of the Eilenberg-Watts theorem - goes back to several sources, primarily the fundamental paper of Rieffel [R].
His investigation was more general and algebraic, and his bimodules were not Hilbert spaces but rigged self-dual Hilbert $C^*$-modules, following Paschke [Pa]. From a correspondence point of view, rigged self-dual Hilbert $C^*$-modules and Hilbert space bimodules give the same theory; the equivalence is discussed nicely in [A1]. (And the former is nothing but an $L^\infty$ version of the latter, as explained in [JS].) Our Hilbert space approach here is parallel to that of Sauvageot [Sa], though modeled more on [R], and is streamlined by our standing assumption of separable preduals.

We will need

**DEFINITION 4.2.** The *contragredient* of the bimodule $\mathcal{M}\mathfrak{H}_{\mathcal{N}}$ is the bimodule $\mathcal{N}\tilde{\mathfrak{H}}_{\mathcal{M}}$, where $\tilde{\mathfrak{H}}$ is conjugate linearly isomorphic to $\mathfrak{H}$ (the image of $\xi$ is written $\tilde{\xi}$), and the actions are defined by $n\tilde{\xi}m = \widetilde{m^*\xi n^*}$. \ No newline at end of file diff --git a/samples/texts/732704/page_7.md b/samples/texts/732704/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..349f10e04fbf48f50aac1e6bcfb8283ffbdeb76b --- /dev/null +++ b/samples/texts/732704/page_7.md @@ -0,0 +1,73 @@ +LEMMA 4.3. Suppose $L(\mathcal{M})$ and $R(\mathcal{N})$ are commutants on $\mathfrak{H}$. Then

$$
\mathcal{N}\tilde{\mathfrak{H}}_{\mathcal{M}} \otimes_{\mathcal{M}} \mathcal{M}\mathfrak{H}_{\mathcal{N}} \simeq \mathcal{N}L^2(\mathcal{N})_{\mathcal{N}}.
$$

PROOF. If $\mathcal{M}\mathfrak{H} \simeq \mathcal{M}R^2(\mathcal{M})q$, then $\mathcal{N} \simeq qM_\infty(\mathcal{M})q$ by (2.3), and $\tilde{\mathfrak{H}}_{\mathcal{M}} \simeq qC^2(\mathcal{M})_{\mathcal{M}}$. By Proposition 3.2 and the comment preceding Proposition 2.1,

$$
\mathcal{N}\tilde{\mathfrak{H}}_{\mathcal{M}} \otimes_{\mathcal{M}} \mathcal{M}\mathfrak{H}_{\mathcal{N}} \simeq \mathcal{N}(q L^2(M_\infty(\mathcal{M}))q)_{\mathcal{N}} \simeq \mathcal{N}L^2(q M_\infty(\mathcal{M})q)_{\mathcal{N}} \simeq \mathcal{N}L^2(\mathcal{N})_{\mathcal{N}}. \qquad \square
$$

Lemma 4.3 was first proven by Sauvageot (in another way).
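As a finite-dimensional sanity check of Definition 4.2 (an illustrative sketch with $\mathcal{M} = \mathcal{N} = M_2$ acting by matrix multiplication, $\tilde{\xi}$ the entrywise conjugate, and all data random), the prescription $n\tilde{\xi}m = \widetilde{m^*\xi n^*}$ really does define a left action and a right action:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 2
mat = lambda: rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
xi, n1, n2, m1, m2 = (mat() for _ in range(5))

tilde = np.conj                       # the conjugate-linear isomorphism xi -> xi~
# act(n, xi~, m) = (m* xi n*)~ ; np.conj(txi) recovers xi from its image
act = lambda n, txi, m: tilde(m.conj().T @ np.conj(txi) @ n.conj().T)

I = np.eye(dim)
# left action is multiplicative: (n1 n2) . xi~ = n1 . (n2 . xi~)
assert np.allclose(act(n1 @ n2, tilde(xi), I), act(n1, act(n2, tilde(xi), I), I))
# right action is multiplicative: xi~ . (m1 m2) = (xi~ . m1) . m2
assert np.allclose(act(I, tilde(xi), m1 @ m2), act(I, act(I, tilde(xi), m1), m2))
```

The adjoints in $\widetilde{m^*\xi n^*}$ are exactly what make the composition laws come out in the right order despite the conjugation.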
In our situation it means

$$
F_{\tilde{\mathfrak{H}}} \circ F_{\mathfrak{H}}(\mathcal{N}\mathfrak{K}) \simeq L^2(\mathcal{N}) \otimes_{\mathcal{N}} \mathcal{N}\mathfrak{K} \simeq \mathcal{N}\mathfrak{K}.
$$

(Here we have used the associativity of the RTP, which is most easily seen from the explicit construction in Section 5.) We conclude that $F_{\tilde{\mathfrak{H}}} \circ F_{\mathfrak{H}}$ is equivalent to the identity functor on $\text{Left } L^2\text{Mod}(\mathcal{N})$, and by a symmetric argument $F_{\mathfrak{H}} \circ F_{\tilde{\mathfrak{H}}}$ is equivalent to the identity functor on $\text{Left } L^2\text{Mod}(\mathcal{M})$. Thus $F_{\mathfrak{H}}$ is a Morita equivalence, and the first implication of Theorem 4.1 is proved.

Now let $F$ be a Morita equivalence from $\text{Left } L^2\text{Mod}(\mathcal{N})$ to $\text{Left } L^2\text{Mod}(\mathcal{M})$. Then $F$ must take $\mathcal{N}R^2(\mathcal{N})$ to a module isomorphic to $\mathcal{M}R^2(\mathcal{M})$, because each is in the unique isomorphism class which absorbs all other modules. (This is meant in the sense that $\mathcal{N}R^2(\mathcal{N}) \oplus \mathcal{N}\mathfrak{H} \simeq \mathcal{N}R^2(\mathcal{N})$; $R^2(\mathcal{N})$ is the "infinite element" in the monoid $V(M_\infty(\mathcal{N}))$.) Being an invertible $*$-functor, $F$ implements a $*$-isomorphism of commutants - call it $\sigma$, not $F$, to ease the notation:

$$
(4.2) \qquad \sigma : M_{\infty}(\mathcal{N}) \xrightarrow{\sim} M_{\infty}(\mathcal{M}).
$$

Apparently we have

$$
(4.3) \qquad F(R^2(\mathcal{N})q) \simeq R^2(\mathcal{M})\sigma(q).
$$

Before continuing the argument, we need an observation: isomorphic algebras have isomorphic standard forms.
Specifically, we may write $L^2(M_\infty(\mathcal{N}))$ as the GNS construction for $\varphi \in M_\infty(\mathcal{N})_*^+$ and obtain the isomorphism

$$
(\sigma^{-1})^t : L^2(M_{\infty}(\mathcal{N})) \xrightarrow{\sim} L^2(M_{\infty}(\mathcal{M})), \quad (\sigma^{-1})^t : x\varphi^{1/2} \mapsto \sigma(x)(\varphi \circ \sigma^{-1})^{1/2}.
$$

Note that $(\sigma^{-1})^t(x\xi y) = \sigma(x)[(\sigma^{-1})^t(\xi)]\sigma(y)$.

Now consider the RTP functor for the bimodule

$$
\mathcal{M}\mathfrak{H}_{\mathcal{N}} = {}_{\sigma^{-1}(\mathcal{M})}\sigma^{-1}(e_{11}^{\mathcal{M}})C^2(\mathcal{N}){}_{\mathcal{N}}.
$$

By Proposition 3.2 and the comment preceding Proposition 2.1, its action is

$$
R^2(\mathcal{N})q \mapsto {}_{\sigma^{-1}(\mathcal{M})}\sigma^{-1}(e_{11}^{\mathcal{M}})L^2(M_\infty(\mathcal{N}))q \stackrel{(\sigma^{-1})^t}{\simeq} {}_{\mathcal{M}}e_{11}^{\mathcal{M}} L^2(M_\infty(\mathcal{M}))\sigma(q) \simeq {}_{\mathcal{M}}R^2(\mathcal{M})\sigma(q) \simeq F(R^2(\mathcal{N})q).
$$

We conclude that $F$ is equivalent to $F_{\mathfrak{H}}$, which finishes the proof of Theorem 4.1.

Notice that (4.2) and (4.3) can also be used to define a functor; this gives us

COROLLARY 4.4. For two von Neumann algebras $\mathcal{M}$ and $\mathcal{N}$, the following are equivalent:

(1) $\mathcal{M}$ and $\mathcal{N}$ are Morita equivalent; \ No newline at end of file diff --git a/samples/texts/732704/page_8.md b/samples/texts/732704/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..2f6422b030556a4e1397925952d12f0c614703d0 --- /dev/null +++ b/samples/texts/732704/page_8.md @@ -0,0 +1,51 @@ +(2) $M_\infty(\mathcal{N}) \simeq M_\infty(\mathcal{M})$;

(3) there is a bimodule $\mathcal{M}\mathfrak{H}_{\mathcal{N}}$ where the actions are commutants of each other;

(4) there is a projection $q \in M_\infty(\mathcal{M})$ with central support 1 such that

$$
qM_{\infty}(\mathcal{M})q \simeq \mathcal{N}.
$$

(The central support of $x \in \mathcal{M}$ is the least projection $z$ in the center of $\mathcal{M}$ satisfying $x = zx$.)
**Example continued.** $M_3$ and $M_5$ are Morita equivalent. This can be deduced easily from condition (2) or (4) of the corollary above. It also follows from the (Hilbert) equivalence bimodule ${}_{M_3}\text{HS}(M_{3\times5})_{M_5}$, where "HS" indicates the Hilbert-Schmidt norm; this bimodule gives us an RTP functor which is a Morita equivalence. Regardless of the construction, the equivalence will send (functorially) $n$ copies of $\mathbb{C}^5$ to $n$ copies of $\mathbb{C}^3$. Apparently Morita equivalence is a coarse relation on type I algebras: it only classifies the *center* of the algebra (up to isomorphism). At the other extreme, Morita equivalence for type III algebras is the same as algebraic isomorphism; Morita equivalence for type II algebras is somewhere in the middle ([R], Sec. 8).

For a bimodule $\mathcal{M}\mathfrak{S}\mathcal{N}$ where the algebras are not necessarily commutants, the functor (4.1) still makes sense. To get a more tractable object, we may consider the domain and range to be isomorphism classes of modules:

$$
(4.4) \qquad \pi_{\mathfrak{S}} : V(M_{\infty}(\mathcal{N})) \to V(M_{\infty}(\mathcal{M}));
$$

$$
F_{\mathfrak{S}}(R^2(\mathcal{N})q) = \mathcal{M}\mathfrak{S}\mathcal{N} \otimes_{\mathcal{N}} R^2(\mathcal{N})q \simeq R^2(\mathcal{M})\pi_{\mathfrak{S}}([q]).
$$

We call this the *bimodule morphism* associated to $\mathfrak{S}$, a sort of "skeleton" for the correspondence. It follows from Proposition 3.3 that if the bimodule is $\rho: \mathcal{N} \hookrightarrow qM_{\infty}(\mathcal{M})q$, then $\pi_{\mathfrak{S}}$ is nothing but $\rho^{\infty}$, the amplification of $\rho$ to $M_{\infty}(\mathcal{N})$, restricted to equivalence classes of projections.

This has an important application to inclusions of algebras. We have seen in Section 3 that a unital inclusion $\mathcal{N} \subset \mathcal{M}$ is equivalent to a bimodule $\mathcal{M}L^2(\mathcal{M})_\mathcal{N}$.
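The defining feature of the $\text{HS}(M_{3\times5})$ equivalence bimodule in the example above — that the left $M_3$-action and right $M_5$-action commute — can be checked numerically. A small sketch (my own illustration, not from the paper), using the standard column-major vectorization identities $\operatorname{vec}(AX) = (I_5 \otimes A)\operatorname{vec}(X)$ and $\operatorname{vec}(XB) = (B^T \otimes I_3)\operatorname{vec}(X)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# The bimodule HS(M_{3x5}): 3x5 complex matrices, with M_3 acting on the
# left and M_5 acting on the right.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # element of M_3
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))  # element of M_5

left_A = np.kron(np.eye(5), A)     # left action of A on vec(X)
right_B = np.kron(B.T, np.eye(3))  # right action of B on vec(X)

# Bimodule property: the left and right actions commute.
assert np.allclose(left_A @ right_B, right_B @ left_A)

# Dimension bookkeeping: as a left M_3-module, HS(M_{3x5}) is 5 copies of C^3.
assert left_A.shape == (15, 15) and 15 == 5 * 3
```

The Kronecker-product identity $(I \otimes A)(B^T \otimes I) = B^T \otimes A = (B^T \otimes I)(I \otimes A)$ is exactly why the two actions commute.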
When the correspondence $\rho$ is surjective, the module generates a Morita equivalence via its RTP functor, and the induced bimodule morphism is an isomorphism of monoids. When $\mathcal{N} \neq \mathcal{M}$, it is natural to expect that the bimodule morphism gives us a way to measure the relative size, or *index*, of $\mathcal{N}$ in $\mathcal{M}$. (For readers unfamiliar with this concept, the index of an inclusion $\mathcal{N} \subset \mathcal{M}$ is denoted $[\mathcal{M} : \mathcal{N}]$ and is analogous to the index of a subgroup. The kernel of this idea goes back to Murray and von Neumann, but the startling results of Jones [J] in the early 1980s touched off a new wave of investigation. We recommend the exposition [K] as a nice starting point.)

For algebras of type I or II, the index can be calculated in terms of bimodule morphisms. (There are also broader definitions of index which require a conditional expectation (= norm-decreasing projection) from $\mathcal{M}$ onto $\mathcal{N}$.) This amounts largely to rephrasing and extension of the paper [Jol], and we do not give details here. Very briefly, let $\pi: V(M_{\infty}(\mathcal{M})) \to V(M_{\infty}(\mathcal{M}))$ be the bimodule morphism for

$$
(4.5) \qquad (\mathcal{M}L^2(\mathcal{M})_\mathcal{N}) \otimes_\mathcal{N} (\mathcal{N}L^2(\mathcal{M})_\mathcal{M}).
$$

When $\mathcal{M}$ is a factor, $\pi$ is a monoidal morphism on the extended nonnegative integers (type I) or extended nonnegative reals (type II). It must be multiplication by a scalar, and this scalar is the index.
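In the finite type I setting this scalar can be recovered by pure dimension counting, as the worked example in the text does by tracking copies of $\mathbb{C}^6$. A sketch (my own illustration; `index_type_I` is a hypothetical helper name, not from the paper) for a unital inclusion $M_k \otimes I \subset M_k \otimes M_m$:

```python
# Index of the inclusion M_k ⊗ I ⊂ M_k ⊗ M_m by dimension counting
# (illustration only; valid for this specific finite type I situation).
def index_type_I(k: int, m: int) -> float:
    dim_small = k * k           # dim of M_k
    dim_big = (k * m) ** 2      # dim of M_k ⊗ M_m = M_{km}
    return dim_big / dim_small  # ratio of algebra dimensions

# For M_3 ⊗ I ⊂ M_3 ⊗ M_2 ≅ M_6 the index is 4.
print(index_type_I(3, 2))  # → 4.0
```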
If $\mathcal{M}$ is not a factor, the index is the spectral radius of $\pi$, provided that $V(M_\infty(\mathcal{M}))$ is endowed with some extra structure (at present it is not even a vector space).

**Example.** Consider the correspondence

$$ {}_{M_6} L^2(M_6)_{M_3}; \quad M_3 \simeq M_3 \otimes I \subset M_3 \otimes M_2 \simeq M_6. $$

The image of ${}_{M_6} L^2(M_6)$ under the RTP functor for (4.5) is

$$ ({}_{M_6} L^2(M_6)_{M_3}) \otimes_{M_3} ({}_{M_3} L^2(M_6)_{M_6}) \otimes_{M_6} ({}_{M_6} L^2(M_6)) \simeq {}_{M_6}L^2(M_6)_{M_3} \otimes_{M_3} {}_{M_3}L^2(M_6) $$

(now counting the dimensions of the Hilbert spaces)

$$ \simeq {}_{M_6}HS(M_{12 \times 3})_{M_3} \otimes_{M_3} {}_{M_3}HS(M_{3 \times 12}) \simeq {}_{M_6}HS(M_{12 \times 12}) \simeq {}_{M_6}HS(M_{6 \times 24}). $$

We have gone from 6 copies of $\mathbb{C}^6$ to 24 copies; that is,

$$ \pi : (\mathbb{Z}_+ \cup \{\infty\}) \to (\mathbb{Z}_+ \cup \{\infty\}), \quad 6 \mapsto 24. $$

Apparently the index is 4, which is also the ratio of the dimensions of the algebras.

**5. Analytic approaches to RTPs**

As mentioned in Section 3, we cannot expect the expression $\xi \otimes_{\mathcal{M}} \eta$ to make sense for every pair of vectors $\xi, \eta$. In essence, the problem is that the product of two $L^2$ vectors is $L^1$, and an $L^1$ space does not lie inside its corresponding $L^2$ space unless the underlying measure is atomic. Densities add, even in the noncommutative setting, and so the product in (3.3) "should" be an $L^1$ matrix. To make this work at the vector level, we need to decrease the density by $1/2$ without affecting the "outside" action of the commutants... and the solution by Connes and Sauvageot ([Sa], [C2]) is almost obvious: choose a faithful $\varphi \in \mathcal{M}_*^+$ and put $\varphi^{-1/2}$ in the middle of the product. That is,

$$ (5.1) \qquad \xi \otimes_\varphi \eta = (\xi\varphi^{-1/2})\eta. $$

This requires some explanation.
Fix faithful $\varphi \in \mathcal{M}_*^+$ and row and column representations of $\mathfrak{H}$ and $\mathfrak{K}$ as in (2.1). We define

$$ \mathcal{D}(\mathfrak{H}, \varphi) = \left\{ \begin{pmatrix} x_1 \varphi^{1/2} \\ x_2 \varphi^{1/2} \\ \vdots \end{pmatrix} \in \mathfrak{H} : \sum x_n^* x_n \text{ exists in } \mathcal{M} \right\}. $$

$\mathcal{D}(\mathfrak{H}, \varphi)$ is dense in $\mathfrak{H}$, and its elements are called $\varphi$-left bounded vectors [C1].

Now by (5.1) we mean the following: for $\xi \in \mathcal{D}(\mathfrak{H}, \varphi)$, we simply erase the symbol $\varphi^{1/2}$ from the right of each entry, then carry out the multiplication. The natural domain is $\mathcal{D}(\mathfrak{H}, \varphi) \times \mathfrak{K}$. Visually,

---

Research Article

Improving the Efficiency of Ratio Estimators by Calibration Weightings

*Etebong P. Clement¹ and Elisha J. Inyang²

¹,²Department of Statistics, University of Uyo, P. M. B. 1017, Uyo - Nigeria

\*Corresponding Author Email: epclement@yahoo.com; inyang.elisha@yahoo.com

It is observed that the performances of most improved ratio estimators depend on some optimality conditions that need to be satisfied to guarantee a better estimator. This paper develops a new approach to ratio estimation that produces a more efficient class of ratio estimators that do not depend on any optimality conditions for optimum performance, using calibration weightings. The relative performances of the proposed calibration ratio estimators are compared with a corresponding global [Generalized Regression (GREG)] estimator.
Results of analysis showed that the proposed calibration ratio estimators are substantially superior to the traditional GREG-estimator, with relatively small bias, mean square error, average length of confidence interval and coverage probability. In general, the proposed calibration ratio estimators are more efficient than all existing estimators considered in the study.

**Keywords:** efficiency comparison, existing estimators, global estimator, optimality conditions, ratio estimator

INTRODUCTION

Ratio estimation has gained more relevance in Statistical Estimation Theory than regression estimation because of its improved precision in estimating population or subpopulation parameters. But the regression estimator, in spite of its lesser practicability, seems to hold a unique position due to its sound theoretical basis. The classical ratio and product estimators, even though considered to be more useful in many practical situations in fields like Agriculture, Forestry, Economics and population studies, have efficiencies which do not exceed that of the linear regression estimator.

This limitation has prompted most survey Statisticians to carry out several studies towards the modification of the existing ratio, product or classes of ratio and product estimators of the population mean in simple random sampling without replacement (SRSWOR) to provide better alternatives and improve efficiency. Among the authors who have proposed improved estimators are [Singh and Tailor (2003), Kadillar and Cingi (2006), Gupta and Shabbir (2008), Sharma and Tailor (2010), Diana et al. (2011), Solanki et al. (2012), Swain (2012), Choudhary and Singh (2012), Subramani and Kumarapandiyan (2012), Singh and Solanki (2012), Khare and Sinha (2012), Haq and Shabbir (2013), Shittu and Adepoju (2014)].
However, it has been observed that the performances of most of the proposed improved ratio estimators in the works cited above depend on some optimality conditions that need to be satisfied to guarantee a better estimator. A unique approach to addressing these problems is calibration estimation. The concept of the calibration estimator was introduced by Deville and Sarndal (1992) in survey sampling. Calibration estimation is a method that uses auxiliary variable(s) to adjust the original design weights to improve the precision of survey estimates of population or subpopulation parameters. The calibration weights are chosen to minimize a given distance measure (or loss function), and these weights satisfy constraints related to the auxiliary variable information. Calibration estimation has been studied by many survey Statisticians. A few key references are [Wu and Sitter (2001), Montanari and Ranalli (2005), Farrel and Singh (2005), Arnab and Singh (2005), Estavao and Sarndal (2006), Kott (2006), Kim et al. (2007), Sarndal (2007), Kim and Park (2010), Kim and Rao (2012), Rao et al. (2012), Koyuncu and Kadilar (2013), Clement et al. (2014), Clement and Enang (2015a,b, 2017), Clement (2016, 2017a,b,c, 2018a,b, 2020, 2021), Enang and Clement (2020)].

This paper develops the theory of calibration estimators for ratio estimation and proposes a class of calibration ratio-type estimators based on the Subramani and Kumarapandiyan (2012) ratio-type estimators under simple random sampling without replacement (SRSWOR).
METHODOLOGY

Subramani and Kumarapandiyan (2012) proposed a set of four ratio-type estimators using known values of the population quartiles of the auxiliary variable and their functions, as given by the following:

$$
(i) \quad \hat{Y}_1 = \bar{y} \frac{\bar{X} + Q_3}{\bar{x} + Q_3} \qquad (1)
$$

$$
B(\hat{Y}_1) = \gamma \bar{Y} (\vartheta_1^2 C_x^2 - \vartheta_1 C_x C_y \rho)
$$

$$
MSE(\hat{Y}_1) = \gamma \bar{Y}^2 (C_y^2 + \vartheta_1^2 C_x^2 - 2 \vartheta_1 C_x C_y \rho)
$$

$$
(ii) \quad \hat{Y}_2 = \bar{y} \frac{\bar{X}+Q_\alpha}{\bar{x}+Q_\alpha} \qquad (2)
$$

$$
B(\hat{Y}_2) = \gamma \bar{Y} (\vartheta_2^2 C_x^2 - \vartheta_2 C_x C_y \rho)
$$

$$
MSE(\hat{Y}_2) = \gamma \bar{Y}^2 (C_y^2 + \vartheta_2^2 C_x^2 - 2 \vartheta_2 C_x C_y \rho)
$$

$$
(iii) \quad \hat{Y}_3 = \bar{y} \frac{\bar{X}+Q_\eta}{\bar{x}+Q_\eta} \qquad (3)
$$

$$
B(\hat{Y}_3) = \gamma \bar{Y} (\vartheta_3^2 C_x^2 - \vartheta_3 C_x C_y \rho)
$$

$$
MSE(\hat{Y}_3) = \gamma \bar{Y}^2 (C_y^2 + \vartheta_3^2 C_x^2 - 2 \vartheta_3 C_x C_y \rho)
$$

$$
(iv) \quad \hat{Y}_4 = \bar{y} \frac{\bar{X}+Q_{\varphi}}{\bar{x}+Q_{\varphi}} \qquad (4)
$$

$$
B(\hat{Y}_4) = \gamma \bar{Y} (\vartheta_4^2 C_x^2 - \vartheta_4 C_x C_y \rho)
$$

$$
MSE(\hat{Y}_4) = \gamma \bar{Y}^2 (C_y^2 + \vartheta_4^2 C_x^2 - 2\vartheta_4 C_x C_y \rho)
$$

where $Q_1$ is the first (lower) quartile, $Q_3$ is the third (upper) quartile, $Q_\alpha = (Q_3 - Q_1)$ denotes the inter-quartile range, $Q_\eta = (Q_3 - Q_1)/2$ denotes the semi-quartile range, $Q_\varphi = (Q_3 + Q_1)/2$ denotes the quartile average, $\gamma = (\frac{1}{n} - \frac{1}{N})$, $\vartheta_1 = (\frac{\bar{X}}{\bar{X}+Q_3})$, $\vartheta_2 = (\frac{\bar{X}}{\bar{X}+Q_\alpha})$, $\vartheta_3 = (\frac{\bar{X}}{\bar{X}+Q_\eta})$, and $\vartheta_4 = (\frac{\bar{X}}{\bar{X}+Q_\varphi})$.
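As a numerical sanity check (my own, not part of the paper), the MSE formula for $\hat{Y}_1$ can be evaluated directly with the Population I summary statistics quoted later in the empirical study ($N=80$, $n=20$, $\bar{Y}=51.8264$, $\bar{X}=11.2646$, $C_x=0.7507$, $C_y=0.3542$, $\rho=0.9413$, $Q_3=16.975$):

```python
# Evaluate MSE(Y_hat_1) = gamma * Ybar^2 * (Cy^2 + v^2*Cx^2 - 2*v*Cx*Cy*rho)
# using the Population I values quoted in the paper's empirical study.
N, n = 80, 20
Ybar, Xbar = 51.8264, 11.2646
Cx, Cy, rho = 0.7507, 0.3542, 0.9413
Q3 = 16.975

gamma = 1 / n - 1 / N
v1 = Xbar / (Xbar + Q3)  # theta_1
mse1 = gamma * Ybar**2 * (Cy**2 + v1**2 * Cx**2 - 2 * v1 * Cx * Cy * rho)
print(round(mse1, 4))  # close to the 1.5562 reported later in Table 2
```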
These ratio estimators proposed by Subramani and Kumarapandiyan (2012), [as given in equations (1)-(4) above], were more efficient than the Sisodia and Dwivedi (1981), Singh and Tailor (2003), Singh et al. (2004) and Yan and Tian (2010) ratio-type estimators, but not the linear regression estimator.

The calibration estimator of the population mean under simple random sampling without replacement (SRSWOR), according to Deville and Sarndal (1992), is given by

$$
\hat{Y}_{DS} = \sum_{i=1}^{n} W_i \bar{y}
\qquad (5)
$$

where $W_i$ are the calibration weights.

Motivated by Deville and Sarndal (1992) and Subramani and Kumarapandiyan (2012), a new set of ratio estimators of the population mean using known values of the population quartiles of the auxiliary variable and their functions is proposed under calibration estimation, as given by the following:

$$
(i) \quad \hat{\bar{Y}}_{\xi}^* = \sum_{i=1}^{n} W_i^* \bar{y} \left[ \frac{\bar{X} + Q_3}{\bar{x} + Q_3} \right] \qquad (6)
$$

$$
(ii) \quad \hat{\bar{Y}}_{\alpha}^{*} = \sum_{i=1}^{n} W_i^{*} \bar{y} \left[ \frac{\bar{X} + Q_{\alpha}}{\bar{x} + Q_{\alpha}} \right] \qquad (7)
$$

$$
(iii) \quad \hat{\bar{Y}}_{\eta}^{*} = \sum_{i=1}^{n} W_i^{*} \bar{y} \left[ \frac{\bar{X} + Q_{\eta}}{\bar{x} + Q_{\eta}} \right] \qquad (8)
$$

$$
(iv) \quad \hat{\bar{Y}}_{\varphi}^* = \sum_{i=1}^{n} W_i^* \bar{y} \left[ \frac{\bar{X} + Q_{\varphi}}{\bar{x} + Q_{\varphi}} \right] \qquad (9)
$$

where $W_i^*$ are the calibration weights such that $0 < W_i^* \le 1$.
The calibration weights $W_i^*$ are chosen such that a chi-square-type loss function of the form

$$
L = \sum_{i=1}^{n} \frac{(W_i^* - d_i)^2}{d_i q_i}
\qquad (10)
$$

is minimized while satisfying the calibration constraint

$$
\sum_{i=1}^{n} W_i^* X_i = \beta_1(x)
\qquad (11)
$$

where $q_i$ is the tuning parameter, $\beta_1(x)$ is the coefficient of skewness of the auxiliary variable and $d_i$ is the design weight given by $d_i = 1/\pi_i$, where $\pi_i$ is the inclusion probability, $\pi_i = n/N$, so that $d_i = N/n$.

Minimizing the loss function (10) subject to the calibration constraint (11) gives the calibration weights in simple random sampling as:

$$W_i^* = d_i + \frac{q_i d_i \bar{X}}{\sum_{i=1}^{n} q_i d_i \bar{X}^2} \left(\beta_1(x) - \sum_{i=1}^{n} d_i \bar{X}\right) \qquad (12)$$

Substituting (12) into equations (6), (7), (8), and (9) respectively and setting $q_i = \bar{X}^{-1}$ gives the proposed calibration ratio estimators under simple random sampling without replacement (SRSWOR) as follows:

$$\text{(i)} \quad \hat{\bar{Y}}_{\xi}^* = \frac{\sum_{i=1}^{n} d_i \bar{y}}{\sum_{i=1}^{n} d_i \bar{X}} \left[ \frac{\bar{X} + Q_3}{\bar{x} + Q_3} \right] \beta_1(x) \qquad (13)$$

$$\text{(ii)} \quad \hat{\bar{Y}}_{\alpha}^* = \frac{\sum_{i=1}^{n} d_i \bar{y}}{\sum_{i=1}^{n} d_i \bar{X}} \left[ \frac{\bar{X} + Q_{\alpha}}{\bar{x} + Q_{\alpha}} \right] \beta_1(x) \qquad (14)$$

$$\text{(iii)} \quad \hat{\bar{Y}}_{\eta}^* = \frac{\sum_{i=1}^{n} d_i \bar{y}}{\sum_{i=1}^{n} d_i \bar{X}} \left[ \frac{\bar{X} + Q_{\eta}}{\bar{x} + Q_{\eta}} \right] \beta_1(x) \qquad (15)$$

$$\text{(iv)} \quad \hat{\bar{Y}}_{\varphi}^* =
\frac{\sum_{i=1}^{n} d_i \bar{y}}{\sum_{i=1}^{n} d_i \bar{X}} \left[ \frac{\bar{X} + Q_{\varphi}}{\bar{x} + Q_{\varphi}} \right] \beta_1(x) \qquad (16)$$

To the first order of approximation, the biases and the mean square errors (MSEs) of the proposed set of estimators are given respectively as follows:

$$\text{(i)} \quad B(\hat{\bar{Y}}_{\xi}^*) = \gamma \bar{Y} \omega (\vartheta_{\xi}^2 C_x^2 - \vartheta_{\xi} C_x C_y \rho) \\ \text{MSE}(\hat{\bar{Y}}_{\xi}^*) = \gamma \bar{Y}^2 \omega^2 (C_y^2 + \vartheta_{\xi}^2 C_x^2 - 2\vartheta_{\xi} C_x C_y \rho) \qquad (17)$$

$$\text{(ii)} \quad B(\hat{\bar{Y}}_{\alpha}^*) = \gamma \bar{Y} \omega (\vartheta_{\alpha}^2 C_x^2 - \vartheta_{\alpha} C_x C_y \rho) \\ \text{MSE}(\hat{\bar{Y}}_{\alpha}^*) = \gamma \bar{Y}^2 \omega^2 (C_y^2 + \vartheta_{\alpha}^2 C_x^2 - 2\vartheta_{\alpha} C_x C_y \rho) \qquad (18)$$

$$\text{(iii)} \quad B(\hat{\bar{Y}}_{\eta}^*) = \gamma \bar{Y} \omega (\vartheta_{\eta}^2 C_x^2 - \vartheta_{\eta} C_x C_y \rho) \\ \text{MSE}(\hat{\bar{Y}}_{\eta}^*) = \gamma \bar{Y}^2 \omega^2 (C_y^2 + \vartheta_{\eta}^2 C_x^2 - 2\vartheta_{\eta} C_x C_y \rho) \qquad (19)$$

$$\text{(iv)} \quad B(\hat{\bar{Y}}_{\varphi}^*) = \gamma \bar{Y} \omega (\vartheta_{\varphi}^2 C_x^2 - \vartheta_{\varphi} C_x C_y \rho) \\ \text{MSE}(\hat{\bar{Y}}_{\varphi}^*) = \gamma \bar{Y}^2 \omega^2 (C_y^2 + \vartheta_{\varphi}^2 C_x^2 - 2\vartheta_{\varphi} C_x C_y \rho) \qquad (20)$$

where $\gamma = (\frac{1}{n} - \frac{1}{N})$, $\omega = (\frac{\beta_1(x)}{\bar{X}})$, $\vartheta_\xi = (\frac{\bar{X}}{\bar{X}+Q_3})$, $\vartheta_\alpha = (\frac{\bar{X}}{\bar{X}+Q_\alpha})$, $\vartheta_\eta = (\frac{\bar{X}}{\bar{X}+Q_\eta})$, and $\vartheta_\varphi = (\frac{\bar{X}}{\bar{X}+Q_\varphi})$

# ANALYTICAL STUDY

## Efficiency comparison

In this section, the MSEs of some existing estimators are compared with the MSEs of the new estimators.
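The minimization behind (10)-(12) is a standard chi-square calibration problem with a single linear constraint, whose Lagrange solution has a closed form. A small sketch (my own toy data; `calibrate` is a hypothetical helper name, written for a generic constraint $\sum_i W_i^* x_i = T_x$) verifying that the resulting weights reproduce the constraint exactly:

```python
import numpy as np

def calibrate(d, x, q, target):
    """Chi-square calibration: minimize sum((w - d)^2 / (d * q))
    subject to sum(w * x) == target (single linear constraint)."""
    lam = (target - np.sum(d * x)) / np.sum(q * d * x**2)
    return d + lam * q * d * x

# Toy data: N = 8, n = 4, so the design weights are d_i = N/n = 2.
x = np.array([1.0, 2.0, 3.0, 4.0])
d = np.full(4, 2.0)
q = np.ones(4)
target = 25.0

w = calibrate(d, x, q, target)
assert np.isclose(np.sum(w * x), target)  # calibration constraint holds
```

Substituting the target and benchmark values used in (11)-(12) into this generic solution recovers the paper's weight formula.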
For clarity and convenience of the targeted readership, let the biases and MSEs of the set of new estimators discussed in section 2 be represented in a single class as follows:

$$B(\hat{\bar{Y}}_i^*) = \gamma \bar{Y} \omega (\vartheta_i^2 C_x^2 - \vartheta_i C_x C_y \rho) \\ MSE(\hat{\bar{Y}}_i^*) = \gamma \bar{Y}^2 \omega^2 (C_y^2 + \vartheta_i^2 C_x^2 - 2\vartheta_i C_x C_y\rho) \qquad (21)$$

where $i=1,2,3,4$ and

$$\left. \begin{aligned} \vartheta_1 &= \left( \frac{\bar{X}}{\bar{X} + Q_3} \right) \\ \vartheta_2 &= \left( \frac{\bar{X}}{\bar{X} + Q_\alpha} \right) \\ \vartheta_3 &= \left( \frac{\bar{X}}{\bar{X} + Q_\eta} \right) \\ \vartheta_4 &= \left( \frac{\bar{X}}{\bar{X} + Q_\varphi} \right) \end{aligned} \right\} \qquad (22)$$

Similarly, let the biases and MSEs of the set of modified ratio estimators proposed by Subramani and Kumarapandiyan (2012) be represented in a single class as follows:

$$
\begin{align*}
B(\hat{Y}_i) &= \gamma \bar{Y} (\vartheta_i^2 C_x^2 - \vartheta_i C_x C_y \rho) \\
MSE(\hat{Y}_i) &= \gamma \bar{Y}^2 (C_y^2 + \vartheta_i^2 C_x^2 - 2\vartheta_i C_x C_y \rho)
\end{align*} \qquad (23)
$$

where $\vartheta_i$ is as defined in equation (22).

Subramani and Kumarapandiyan (2012) estimators

It is observed from equations (21) and (23) that the new calibration ratio estimators would be more efficient than the existing modified ratio estimators of Subramani and Kumarapandiyan (2012) if

$$
\omega^2 \le 1 \qquad (24)
$$

Classical ratio estimator

The classical ratio estimator of the mean by Hansen et al.
(1946) is given by

$$
\hat{Y}_R = \frac{\bar{y}}{\bar{x}} \bar{X} \qquad (25)
$$

The bias and MSE are respectively given by

$$
B(\hat{Y}_R) = \gamma \bar{Y} (C_x^2 - C_x C_y \rho) \text{ and}
$$

$$
MSE(\hat{Y}_R) = \gamma \bar{Y}^2 (C_y^2 + C_x^2 - 2C_x C_y \rho) \qquad (26)
$$

It is observed from equations (21) and (26) that the new calibration ratio estimators would be more efficient than the classical ratio estimator of Hansen *et al.* (1946) if

$$
\omega^2 \leq \frac{(\phi^2 - 2\phi\rho + 1)}{(\vartheta_i^2 \phi^2 - 2\vartheta_i \phi\rho + 1)} \qquad (27)
$$

where $\phi = (C_x/C_y)$

Classical regression estimator

The classical regression estimator proposed by Hansen *et al.* (1953) is given by

$$
\hat{Y}_{lr} = \bar{y} + b(\bar{X} - \bar{x}) \qquad (28)
$$

where $b = (\frac{s_{xy}}{s_x^2})$ is the sample regression coefficient of $y$ on $x$, with

$$
MSE(\hat{Y}_{lr}) = \gamma \bar{Y}^2 C_y^2 (1 - \rho^2) \qquad (29)
$$

It is observed from equations (21) and (29) that the new calibration ratio estimators would be more efficient than the classical regression estimator of Hansen *et al.* (1953) if

$$
\omega^2 \leq \frac{(1 - \rho^2)}{(\vartheta_i^2 \phi^2 - 2\vartheta_i \phi \rho + 1)} \qquad (30)
$$

where $\phi = (C_x/C_y)$

The percent relative efficiency (PRE)

In this section, the percent relative efficiency (PRE) of members of the proposed class of calibration ratio estimators and the existing estimators considered in this study are computed and presented in Table 2.
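To make the two classical competitors concrete, here is a toy computation (my own illustrative data, not from the paper) of the ratio estimator (25) and the regression estimator (28):

```python
import numpy as np

# Toy population/sample summaries (illustrative values only).
Xbar_pop = 10.0                       # known population mean of x
x = np.array([8.0, 9.0, 11.0, 12.0])  # sample x
y = 2.0 * x                           # y exactly proportional to x

xbar, ybar = x.mean(), y.mean()

# Classical ratio estimator (25): (ybar / xbar) * Xbar
y_ratio = (ybar / xbar) * Xbar_pop

# Classical regression estimator (28): ybar + b * (Xbar - xbar)
b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
y_reg = ybar + b * (Xbar_pop - xbar)

# With y proportional to x, both recover the true mean 2 * Xbar exactly.
print(y_ratio, y_reg)  # → 20.0 20.0
```

When the relationship between $y$ and $x$ has a nonzero intercept, the two estimators diverge, which is exactly the regime conditions (27) and (30) quantify.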
The percent relative efficiency (PRE) of an estimator $\theta$ with respect to the classical ratio estimator $\hat{Y}_R$ is defined by

$$
PRE(\theta, \hat{Y}_R) = \frac{Var(\hat{Y}_R)}{Var(\theta)} \times 100 \qquad (31)
$$

EMPIRICAL STUDY

To investigate the theoretical results, as well as to test the optimality and efficiency performance of the new calibration estimators over the other existing estimators considered in this study, the data of the following populations [as adapted from Subramani and Kumarapandiyan (2012)] were used.

**Population I**

$X$ is Fixed Capital and $Y$ is Output for 80 factories in a region

$$
\begin{array}{l @{\quad} l @{\quad} l @{\quad} l @{\quad} l}
N = 80 & n = 20 & \bar{Y} = 51.8264 & \bar{X} = 11.2646 & \rho = 0.9413 \\
C_x = 0.7507 & C_y = 0.3542 & S_x = 8.4563 & S_y = 18.3569 & \beta_{1(x)} = 1.0500 \\
\beta_{2(x)} = 0.0634 & Q_3 = 16.975 & Q_\alpha = 11.825 & Q_\eta = 5.9125 & Q_\varphi = 11.0625 \\
\text{[Murthy (1967)]} & & & &
\end{array}
$$

**Population II**

$X$ is data on number of workers and $Y$ is Output for 80 factories in a region.

$$
\begin{array}{l @{\quad} l @{\quad} l @{\quad} l @{\quad} l}
N = 80 & n = 20 & \bar{Y} = 51.8264 & \bar{X} = 2.8513 & \rho = 0.9150 \\
C_x = 0.9484 & C_y = 0.3542 & S_x = 2.7042 & S_y = 18.3569 & \beta_{1(x)} = 0.6978 \\
\beta_{2(x)} = 1.3005 & Q_3 = 4.4750 & Q_\alpha = 3.6150 & Q_\eta = 1.8075 & Q_\varphi = 2.6675 \\
\text{[Murthy (1967)]} & & & &
\end{array}
$$

Table 1: Bias and MSE of the proposed estimators
| Estimator | Bias (Pop. I) | MSE (Pop. I) | Bias (Pop. II) | MSE (Pop. II) |
|---|---|---|---|---|
| $\hat{\bar{Y}}_{\xi}^*$ | 0.0018 | 0.0135 | 0.0079 | 0.1354 |
| $\hat{\bar{Y}}_{\alpha}^*$ | 0.0022 | 0.0135 | 0.0187 | 0.1766 |
| $\hat{\bar{Y}}_{\eta}^*$ | 0.0142 | 0.0347 | 0.0708 | 0.5197 |
| $\hat{\bar{Y}}_{\varphi}^*$ | 0.0031 | 0.0143 | 0.0386 | 0.2892 |
Table 2: Performance of estimators from analytical study
| Estimator | MSE (Pop. I) | PRE (Pop. I) | MSE (Pop. II) | PRE (Pop. II) |
|---|---|---|---|---|
| Classical ratio | 18.9764 | 100 | 41.3170 | 100 |
| Classical regression | 1.4400 | 1,317.8056 | 2.0572 | 2,008.4095 |
| Subramani & Kumarapandiyan (2012) $\hat{Y}_1$ | 1.5562 | 1,219.4062 | 2.2604 | 1,827.8623 |
| Subramani & Kumarapandiyan (2012) $\hat{Y}_2$ | 1.5486 | 1,225.3909 | 2.9489 | 1,401.0987 |
| Subramani & Kumarapandiyan (2012) $\hat{Y}_3$ | 3.9830 | 476.4348 | 8.6761 | 476.2163 |
| Subramani & Kumarapandiyan (2012) $\hat{Y}_4$ | 1.6470 | 1,152.1797 | 4.8281 | 855.7611 |
| Proposed $\hat{\bar{Y}}_{\xi}^*$ | 0.0135 | 140,565.9259 | 0.1354 | 30,514.7711 |
| Proposed $\hat{\bar{Y}}_{\alpha}^*$ | 0.0135 | 140,565.9259 | 0.1766 | 23,395.8097 |
| Proposed $\hat{\bar{Y}}_{\eta}^*$ | 0.0347 | 54,687.0317 | 0.5197 | 7,950.1636 |
| Proposed $\hat{\bar{Y}}_{\varphi}^*$ | 0.0143 | 132,702.0979 | 0.2892 | 14,286.6528 |
## SIMULATION STUDY

### Background and analytical set-up

The data used are obtained from the 2005 socio-economic household survey of Akwa Ibom State conducted by the Ministry of Economic Development, Uyo, Akwa Ibom State, Nigeria [Akwa Ibom State Government (2005)]. The study variable $Y$ represents the household expenditure on food and the auxiliary variable $X$ represents the household income. An assisting model of the form $y_i = \beta_0 + \beta_1 x_i + e_i$ was designed for the calibration estimators, where the $e_i$ are independently generated from the standard normal distribution.

The simulation study was conducted using the R statistical package and was replicated $B = 1,500$ times. In the $b$-th run ($b = 1, 2, ..., B$), a Bernoulli sample was drawn where each unit was selected into the sample independently, with inclusion probability $\pi_i = n/N$. For simplicity the tuning parameter $q_i$ was set to unity ($q_i = 1$) and $n = 300$. The corresponding GREG-estimator and calibration ratio estimators of $\bar{Y}$ are computed: $\hat{\bar{Y}}_{GREG}^{*(b)}, \hat{\bar{Y}}_{\xi}^{*(b)}, \hat{\bar{Y}}_{\alpha}^{*(b)}, \hat{\bar{Y}}_{\eta}^{*(b)}$ and $\hat{\bar{Y}}_{\varphi}^{*(b)}$. The results of the analysis are given in Table 3.

## Comparisons with a Global Estimator

The concept of calibration estimators proposed by Deville and Sarndal (1992) is simply a class of linearly weighted estimators, of which the Generalized Regression (GREG) estimator is a special member. Deville and Sarndal (1992) have shown that all calibration estimators are asymptotically equivalent to the GREG-estimator.
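One replicate of the Bernoulli sampling scheme described above can be sketched as follows (my own simplification: a synthetic population stands in for the household survey data, and only a design-weighted mean is computed, not the full GREG or calibration estimators):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the household survey population (assumed sizes).
N, n = 5000, 300
x = rng.gamma(shape=2.0, scale=500.0, size=N)        # household income
y = 50.0 + 0.6 * x + rng.standard_normal(N) * 30.0   # expenditure on food

# Bernoulli sampling: each unit enters independently with pi_i = n / N.
pi = n / N
in_sample = rng.random(N) < pi
d = 1.0 / pi  # design weight d_i = N / n

# Design-weighted (Horvitz-Thompson-style) estimate of the mean of y.
y_ht = np.sum(d * y[in_sample]) / N
print(round(y_ht, 1))  # close to y.mean() for a sample of this size
```

Repeating this draw-and-estimate step $B$ times and storing each $\hat{\bar{Y}}_i^{*(b)}$ yields the inputs for the bias, MSE, AL and CP criteria defined below.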
In this section, the efficiency performance of the new calibration ratio estimators is compared with a corresponding global estimator [the Generalized Regression (GREG) estimator] and the results of the analysis are presented in Table 3.

### Simulation evaluation

Let $\hat{\bar{Y}}_i^{*(b)}$ be the estimate of $\hat{\bar{Y}}_i^*$ in the $b$-th simulation run, $b = 1, 2, ..., B$ ($B=1,500$), for a given estimator (say) $\hat{\bar{Y}}_i^*$. To compare the efficiency performance of the new calibration ratio estimators with the GREG-estimator, the following criteria were used: bias (B), mean square error (MSE), average length of confidence interval (AL) and coverage probability (CP) of $\hat{\bar{Y}}_i^*$. Each measure is calculated as follows:

$$ (i) \quad B(\hat{\bar{Y}}_i^*) = \bar{\hat{Y}}_i^* - \bar{Y} $$

where $\bar{\hat{Y}}_i^* = \frac{1}{B} \sum_{b=1}^{B} \hat{\bar{Y}}_i^{*(b)}$

$$ (ii) \quad \text{MSE}(\hat{\bar{Y}}_i^*) = \frac{\sum_{b=1}^{B} (\hat{\bar{Y}}_i^{*(b)} - \bar{\hat{Y}}_i^*)^2}{B} $$

where $\hat{\bar{Y}}_i^{*(b)}$ is the estimate based on sample $b$ and $B$ is the total number of samples drawn for the simulation.

$$ (iii) \quad CP(\hat{\bar{Y}}_i^*) = \frac{1}{B} \sum_{b=1}^{B} I(\hat{\bar{Y}}_L^{*(b)} < \bar{Y} < \hat{\bar{Y}}_U^{*(b)}) $$

where $I(\cdot)$ is the indicator function, $\hat{\bar{Y}}_L^{*(b)}$ is the lower confidence limit and $\hat{\bar{Y}}_U^{*(b)}$ is the upper confidence limit.

The coverage probability of a 95% confidence interval is the ratio of the number of times the true population mean is included in the interval to the total number of runs or the number of replicates.
For each estimator $\hat{\bar{Y}}_i^*$, a 95% confidence interval $(\hat{\bar{Y}}_L^{*(b)}, \hat{\bar{Y}}_U^{*(b)})$ is constructed, where

$$ \begin{align*} \hat{\bar{Y}}_L^{*(b)} &= \hat{\bar{Y}}_i^{*(b)} - 1.96 \sqrt{V(\hat{\bar{Y}}_i^{*(b)})}, \\ \hat{\bar{Y}}_U^{*(b)} &= \hat{\bar{Y}}_i^{*(b)} + 1.96 \sqrt{V(\hat{\bar{Y}}_i^{*(b)})} \text{ and} \\ V(\hat{\bar{Y}}_i^{*(b)}) &= \frac{1}{B-1} \sum_{b=1}^{B} (\hat{\bar{Y}}_i^{*(b)} - \bar{\hat{Y}}_i^*)^2 \end{align*} $$

$$ (iv) \quad AL(\hat{\bar{Y}}_i^*) = \frac{1}{B} \sum_{b=1}^{B} (\hat{\bar{Y}}_U^{*(b)} - \hat{\bar{Y}}_L^{*(b)}) $$

Table 3: Performance of estimators from simulation study
| Estimator | B | MSE | AL | CP |
|---|---|---|---|---|
| GREG | 0.0082 | 84226 | 902.25 | 0.9174 |
| Proposed $\hat{\bar{Y}}_{\xi}^*$ | 0.0062 | 41182 | 762.46 | 0.9076 |
| Proposed $\hat{\bar{Y}}_{\alpha}^*$ | 0.0065 | 41696 | 770.06 | 0.9085 |
| Proposed $\hat{\bar{Y}}_{\eta}^*$ | 0.0073 | 42012 | 779.98 | 0.9102 |
| Proposed $\hat{\bar{Y}}_{\varphi}^*$ | 0.0068 | 41904 | 774.92 | 0.9094 |
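The CP and AL criteria reduce to simple averages over the replicates; a deterministic mini-example (my own toy numbers, three hypothetical "replicates") of how they are computed:

```python
import numpy as np

true_mean = 10.0

# Toy confidence limits from three hypothetical simulation replicates.
lower = np.array([9.0, 9.5, 10.5])
upper = np.array([11.0, 10.4, 11.5])

# Coverage probability: fraction of intervals containing the true mean.
cp = np.mean((lower < true_mean) & (true_mean < upper))

# Average length of the confidence intervals.
al = np.mean(upper - lower)

print(cp, al)  # 2 of 3 intervals cover the true mean, so cp = 2/3
```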
## DISCUSSION OF RESULTS

The results of the analytical study presented in Table 2 showed that all the proposed calibration ratio estimators are more efficient than both the classical regression estimator and all the modified ratio-type estimators proposed by Subramani and Kumarapandiyan (2012) under the two populations considered in the study, with relatively high percent gains in efficiency. Again, it is observed that all the proposed calibration ratio estimators are almost unbiased, as is reflected by their respective biases in Table 1 under the two populations considered in the study.

For the simulation evaluation, the results of the residual diagnostics showed an $R^2$ value of 0.7489, indicating that the model is significant. The correlation between the study variable $Y$ and the auxiliary variable $X$, $\rho_{xy} = 0.8934$, is strong and sufficient, implying that the calibration ratio estimators would provide better estimates of the population mean.

Analysis for the comparison of performance of estimators showed that the biases of 0.62 percent, 0.65 percent, 0.73 percent, 0.68 percent, and 0.82 percent respectively for the four calibration ratio estimators and the GREG-estimator are negligible. But the GREG-estimator, though its bias is negligible, is the most biased among the estimators, while the proposed calibration ratio estimator number one is the least biased among all the estimators considered. The variance for the GREG-estimator is significantly larger than those of the four calibration ratio estimators, as is indicated by their respective mean square errors in Table 3.
The average length of the confidence interval for each of the calibration ratio estimators is significantly smaller than that of the GREG-estimator. Similarly, the coverage probability of each of the calibration ratio estimators is also smaller than that of the GREG-estimator. These results showed that there is greater variation in the estimates made by the GREG-estimator than in those made by the calibration ratio estimators.

Within the calibration ratio estimators, calibration ratio estimator number one possesses more appealing statistical properties than the other three calibration ratio estimators, as is reflected in Table 3.

In general, the proposed calibration ratio estimators are substantially superior to the traditional GREG-estimator, with relatively small bias, mean square error, average length of confidence interval and coverage probability.

## CONCLUSION

This paper develops the concept of the calibration estimator for ratio estimation and proposes a set of four calibration ratio estimators of the population mean under simple random sampling without replacement (SRSWOR), based on the Subramani and Kumarapandiyan (2012) ratio-type estimators. Their MSEs are derived under large sample approximation and compared with those of related existing estimators in theory. Analytical and numerical results showed that the four proposed calibration ratio estimators are each always more efficient than the classical regression estimator and all related existing estimators considered in the study.

A comparison with a corresponding global estimator [the Generalized Regression (GREG) estimator] showed that the four proposed calibration ratio estimators are each substantially superior to the traditional GREG-estimator, with relatively small bias, mean square error, average length of confidence interval and coverage probability.

These results prove the dominance of the new proposal over existing estimators and thus provide better alternative estimators in practical situations.
+ +## REFERENCES + +Akwa Ibom State Government (2005): Report of the Socio-economic study of Akwa Ibom State. Ministry of Economic Development, Uyo, Akwa Ibom State - Nigeria. + +Arnab, R. and Singh, S. (2005): A note on variance estimation for the generalized regression predictor. *Australia and New Zealand Journal of Statistics*. Vol.47 No.2 pp 231-234. + +Choudhary, S. and Singh, B.K. (2012): A class of chain ratio-cum-dual to ratio type estimator with two auxiliary characters under double sampling in sample surveys. *Statistics in Transition-New Series*, Vol.13 No.3 pp 519-536. + +Clement, E. P. & Enang, E. I. (2017): On the Efficiency of Ratio Estimator over the Regression Estimator, *Communication in Statistics: Theory and Methods*, Vol. 46 No.11 pp 5357-5367. + +Clement, E. P. (2016): An Improved Ratio Estimator for Population Mean in Stratified Random Sampling. *European Journal of Statistics and Probability* Vol.4 No. 4 pp:12-17 + +Clement, E. P. (2017a): Efficient Exponential Estimators of Population Mean in Survey Sampling. *International Journal of Mathematics and Computation* Vol. 28 No.3: pp 94-106 + +Clement, E. P. (2017b): Generalized Chain Ratio-Product Estimator for Estimating Population Mean with Auxiliary Variate, *Elixir Statistics Vol.* 106, pp 46471-46478 + +Clement, E. P. (2018a): Improved Family of Ratio Estimators of Finite Population Variance in Stratified Random Sampling. *Biostatistics and Biometrics Open Access Journal Vol.*5 No.2: 555659 DOI:10.19080/BBOAJ.2018.04.555659 + +Clement, E. P. (2018b): A Note on Calibration Weightings in Stratified Double Sampling with Equal Probability, *Communication in Statistics: Theory and Methods*, Vol. 
47 No.12 pp :2835-2847 \ No newline at end of file diff --git a/samples/texts/7442509/page_8.md b/samples/texts/7442509/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..80dae3f9ad959914b21b17e27874bfda0f5f6790 --- /dev/null +++ b/samples/texts/7442509/page_8.md @@ -0,0 +1,59 @@ +Clement, E. P. (2020): On the Improvement of Multivariate Ratio Method of Estimation in Sample Surveys by Calibration Weightings. *Asian Journal of Probability and Statistics* Vol. 10 No. 1 pp 1-12. DOI: 10.9734/AJPAS/2020/v10i130236 + +Clement, E. P. (2021): Ratio-Product Estimator in Stratified Double Sampling based on the coefficient of skewness of the Auxiliary Variable *International Journal of Statistics and Applied Mathematics*, Vol.6 No. 1:pp 24-28 + +Clement, E. P. and Enang, E. I. (2015a): Calibration approach alternative ratio estimator for population mean in stratified sampling. *International Journal of Statistics and Economics*, Vol. 16 No.1 pp 83-93. + +Clement, E. P. and Enang, E. I. (2015b): Multivariate calibration estimation for domain in stratified random sampling. *International Journal of Modern Mathematical Sciences*, Vol. 13 No.2 pp 187-197. + +Clement, E. P.(2017c): Efficient Product-cum-Dual to Product Estimators of Population Mean in Systematic Sampling, *Elixir Statistics Vol.* 106, pp 46487-46493 + +Clement, E. P., Udofia, G.A. and Enang, E. I. (2014): Estimation for domains in stratified random sampling in the presence of non-response. *American Journal of Mathematics and Statistics*, Vol.4 No.2 pp. 65-71. + +Deville, J.C. and Sarndal, C. E. (1992): Calibration estimators in survey sampling. *Journal of the American Statistical Association*, Vol.87pp 376-382. + +Diana, G., Giordan, M. and Perri P. F. (2011): An improved class of estimators for the population mean, *Statistical methods and Applications* Vol. 20 pp123-140, + +Enang, E.I. & Clement, E. P. 
(2020): An efficient class of calibration Ratio Estimators of Domain Mean in Survey Sampling, Communication in Mathematics and Statistics Vol. 8: pp 279-293 DOI: 10.1007/s40304-018-00174-z + +Estavao, V. M. and Sarndal, C.E. (2006): Survey estimates by calibration on complex auxiliary information. *International Statistical Review* Vol. 74 pp127-147. + +Farrell, P.J. and Singh, S (2005): Model-assisted higher order calibration of estimators of variance. *Australia and New Zealand Journal of Statistics* Vol.47 No. 3 pp 375-383. + +Gupta, S. and Shabbir, J. (2008): On the improvement in estimating the population mean in simple random sampling, *Journal of Applied Statistics* Vol. 35 No.5 pp 559-566, + +Hansen, M.H., Hurwitz, W. N. & Madow, W.G. (1953): *Sample survey methods and theory*. New York: John Wiley and Sons, 456-464. + +Hansen, M.H., Hurwitz, W.N. & Gurney, M. (1946): The problems and methods of the Sample survey of business, *Journal of the American Statistical Association*, Vol. 41 pp 173-189 + +Haq, A. and Shabbir, J. (2013): Improved family of ratio estimators in simple and stratified random Sampling, *Communications in Statistics - Theory and Methods* Vol. 42 No.5 pp 782-799, + +Kadilar, C. and Cingi, H. (2006): An improvement in estimating the population mean by using the correlation coefficient. *Hacettepe Journal of Mathematics and Statistics* Vol.35 No. 1 pp 103-109. + +Khare, B. B. and Sinha, R.R. (2012): Combined class of estimators for ratio and product of two population means in presence of non-response. *International Journal of Statistics and Economics*, Vol. 8 No. 12 pp 12-20. + +Kim, J. K. & Rao, J.N.K (2012): Combining data from two independent surveys: a model-assisted approach. *Biometrika* Vol. 99 pp 85-100. + +Kim, J. K. and Park, M. (2010): Calibration estimation in survey sampling. *International Statistical Review*, Vol. 78 No.1 pp 21-29. + +Kim, J.M. Sungur, E.A. and Heo T.Y. 
(2007): Calibration approach estimators in stratified sampling. *Statistics and Probability Letters*, Vol. **77** No. 1 pp 99-103. + +Kott, P.S. (2006): Using calibration weighting to adjust for non-response and coverage errors, *Survey Methodology*, Vol. 32 pp 133-142. + +Koyuncu, N. and Kadilar, C. (2013): Calibration estimators using different measures in stratified random sampling. *International Journal of Modern Engineering Research*, Vol. 3 No. 1 pp 415-419. + +Montanari, G. E. and Ranalli, M.G. (2005): Nonparametric model calibration estimation in survey sampling. *Journal of the American Statistical Association*, Vol. 100 pp 1429-1442. + +Murthy, M. N. (1967): *Sampling theory and methods*. Calcutta, India: Statistical Publishing Society. + +Rao, D., Khan, M. G. M. & Khan, S. (2012): Mathematical programming on multivariate calibration estimation in stratified sampling. *World Academy of Science, Engineering and Technology*, VOL. 72 pp 12-27. + +Sarndal, C-E (2007): The calibration approach in survey theory and practice. *Survey Methodology*, Vol. 33 pp 99-119. + +Sharma, B. & Tailor, R. (2010): A new ratio-cum-dual to ratio estimator of finite population mean in simple random sampling. *Global Journal of Science*, Vol. 10 No. 1 pp 27-31. + +Shittu, O. I. & Adepoju, K. A. (2014): on the efficiency of modified ratio estimator based on linear combination of kurtosis, median and quartile deviation. *International Journal of Modern Mathematics Sciences*, Vol. 11 No. 2 pp 103-107. + +Singh, H P and Solanki, R. S. (2012): Improved estimation of population mean in simple random sampling using information on auxiliary attribute, *Applied Mathematics and Computation* Vol. 218 No. 15 pp 7798-7812. 
+ +Improving the Efficiency of Ratio Estimators by Calibration Weightings \ No newline at end of file diff --git a/samples/texts/7442509/page_9.md b/samples/texts/7442509/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..b2462c8ddc9d9c26fbfbdffa7930faaeeea03898 --- /dev/null +++ b/samples/texts/7442509/page_9.md @@ -0,0 +1,21 @@ +Singh, H. P. & Tailor, R. (2003): Use of known correlation coefficient in estimating the finite population mean. *Statistics in Transition* Vol. 6 No. 4 pp 555-560. + +Singh, HP; Tailor, R and Kakran, MS. (2004): An improved estimator of population means using power transformation. *Journal of the Indian Agricultural Statistics*, Vol. 58 No. 2 pp 223-230. + +Solanki, R. S., Singh, H. P. and Rathour, A. (2012): An alternative estimator for estimating the finite population mean using auxiliary information in sample surveys. *ISRN Probability and Statistics*. Article ID 657682, doi:10.5402/2012/657682 + +Sosidia, B. V. S., and Dwivedi, V. K. (1981): A modified ratio estimator using coefficient of variation of auxiliary variable. *Jour. Ind. Soc. Agri. Stat.*, Vol. 33 No. 1 pp 13-18. + +Subramani, J. & Kumarapandiyan, G. (2012): A class of almost unbiased modified ratio estimators for population mean with known population parameters. *Elixir Statistics* Vol. 44 pp 7411-7415 + +Swain, A.K.P.C. (2012): On classes of modified ratio type and regression-cum-ratio type estimators in sample surveys using two auxiliary variables. *Statistics in Transition-New Series*, Vol. 13 No. 3 pp 473-494. + +Wu, C. & Sitter, R. R. (2001): A Model-calibration approach to using complete auxiliary information from survey data, *Journal of the American Statistical Association* Vol.96 pp185-193. + +Yan, Z. & Tian, B., (2010): Ratio Method to the mean Estimation Using Coefficient of Skewness of Auxiliary variable, ICICA, 2010, Part III, CCIC Vol.106, p103-110. + +Accepted 2 February 2021 + +**Citation:** Clement EP and Inyang EJ (2021). 
Improving the Efficiency of Ratio Estimators by Calibration Weightings. International Journal of Statistics and Mathematics, 8(1): 164-172. + +**Copyright:** © 2021 Clement and Inyang. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited. \ No newline at end of file diff --git a/samples/texts/7449877/page_1.md b/samples/texts/7449877/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..95be298b4a9f0f75b8fbc5379f84d3747d386a7d --- /dev/null +++ b/samples/texts/7449877/page_1.md @@ -0,0 +1,61 @@ +# Efficient Query Integrity for Outsourced Dynamic +Databases + +Qingji Zheng +Department of Computer Science +University of Texas at San Antonio +qzheng@cs.utsa.edu + +Shouhuai Xu +Department of Computer Science +University of Texas at San Antonio +shxu@cs.utsa.edu + +Giuseppe Ateniese +Department of Computer Science +Sapienza-University of Rome and +Johns Hopkins University +ateniese@di.uniroma1.it + +## ABSTRACT + +As databases are increasingly outsourced to the cloud, data owners require various security assurances. This paper investigates one particular assurance, *query integrity*, by which a database querier (either the data owner or a third party) can verify that its queries were faithfully executed by the cloud server with respect to the outsourced database. Query integrity is investigated in the setting of dynamic databases, where the outsourced databases can be updated by the data owners as needed. We present a formal security definition of query integrity and a provably-secure efficient construction. Our solution improves upon the state-of-the-art solutions by additionally allowing aggregate queries and more flexible join queries. In addition, we provide better performance by eliminating a linear factor in the extra storage complexity for security purpose. 
Our solution also achieves a trade-off between computational and communication complexities. + +## Categories and Subject Descriptors + +C.2.4 [Communication Networks]: Distributed Systems; H.2 [DATABASE MANAGEMENT]: + +### General Terms + +Security + +### Keywords + +Dynamic outsourced database, query integrity, authenticated data structure. + +## 1. INTRODUCTION + +When databases are outsourced to the cloud, security issues arise. The concern that outsourced data may be modified or (partially) deleted has led to novel solutions to assuring the *storage integrity* of outsourced data [2, 10, 3, 25]. However, *query integrity*, verifying whether or not queries against outsourced data are faithfully executed, has not been adequately addressed. Intuitively, query integrity aims to assure the queriers, which can be the data owners and third parties (e.g., the data owners' business partners), that their queries are executed against the outsourced data (i.e., neither a portion of it nor a modified version of it). Despite some previous studies [11, 13, 17, 18, 16, 19, 23], the problem of query integrity largely remains open. + +Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
CCSW'12, October 19, 2012, Raleigh, North Carolina, USA.
Copyright 2012 ACM 978-1-4503-1665-1/12/10 ...$15.00. + +## 1.1 Our Contributions + +We present a formal security definition and an efficient construction for query integrity in the setting of outsourced dynamic databases. Our solution can be characterized from three perspectives: (i) functionality, (ii) security, and (iii) efficiency. 
From the perspective of (i) functionality, our solution supports four kinds of queries (selection, projection, join, and aggregate), whereas the state-of-the-art solutions [13, 23] support only selection, projection and join queries, not aggregate queries (see Section 5.4 for details). Moreover, our solution supports strictly more *flexible* join queries, meaning that the queries do not have to be defined with respect to pre-defined keyword attributes. In contrast, the state-of-the-art solutions [13, 23] only support join queries with respect to pre-defined keyword attributes. + +From the perspective of (ii) security, our solution is provably secure as long as the two underlying building-blocks are provably secure. The first building-block is called *Authenticated Outsourced Ordered Data Set*, and the second is called *Homomorphic Linear Tag*. Although our concrete solution is based on our specific constructions of these building-blocks, its security analysis applies directly to solutions that use other (perhaps more efficient) building-blocks, as long as those building-blocks satisfy their respective security definitions. This is due to our modular construction and “compiler”-like security analysis. + +From the perspective of (iii) efficiency, our solution is characterized as follows. Let $m$ be the number of attributes and $n$ be the number of tuples. + +* Our solution incurs an $O(n)$ storage complexity at the cloud side for security purposes, in contrast to the $O(mn)$ of [13, 23]. + +* For selection queries, our solution incurs $O(n)$ exponentiations at the querier side, which is not as efficient as the $O(n)$ hash operations of [13] but more efficient than the $O(n)$ exponentiation operations on bilinear map of [23]. + +Our solution incurs communication of $O(n)$ tags, which is less efficient than the $O(\log n)$ hash values of [13] but comparable to the $O(n)$ of [23]. 
+ 

* For projection queries, our solution incurs $O(n)$ modular exponentiations at the querier side. This is not as efficient as the $O(n)$ hash operations of [13], but much more efficient than the $O(nk)$ exponentiation operations on bilinear map \ No newline at end of file diff --git a/samples/texts/7449877/page_10.md b/samples/texts/7449877/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..d3a4400be6fa36afb0faf43534122f228732737f --- /dev/null +++ b/samples/texts/7449877/page_10.md @@ -0,0 +1,21 @@ +
|  |  | Li et al. [13] | Pang et al. [23] | This paper |
|---|---|---|---|---|
| Functions |  | Selection, Projection, Join | Selection, Projection, Join | Selection, Projection, Join, Aggregate |
| Technique |  | Merkle-based Hash Tree | Aggregate Signature with Chaining | Merkle-based Hash Tree and HLT |
| Security |  | Sound | Sound | Sound |
| Data PreProcessing |  | O(n)Hash | O(mn)Ex | O(n)Hash + O(n)Ex |
| Storage Overhead |  | O(mn)Hash | O(mn)AggSig | O(n)Hash + O(n)Tag |
| Selection | Computation S | N/A | O(n)Mu |  |
| Selection | Communication | O(log n)Hash | O(n)Bitmap |  |
| Selection | Computation V | O(n)Hash | O(n)Ex |  |
| Projection | Computation S | N/A | O(kn)Mu |  |
| Projection | Communication | O((m-k)n)attribute | O(n)Bitmap |  |
| Projection | Computation V | O(n)Hash | O(kn)Ex |  |
| Join | Computation S | N/A | O(n)Mu |  |
| Join | Communication | O(n log(n))Hash + R* | O(n)Bitmap + R* |  |
| Join | Computation V | O(n log(n))Hash | O(n)Ex |  |
| Aggregate | Computation S | N/A | N/A |  |
| Aggregate | Communication | N/A | N/A |  |
| Aggregate | Computation V | N/A | N/A | N/A |
| Update | Computation S | O(log n)Hash | N/A |  |
| Update | Communication | O(1) | O(1) |  |
| Update | Computation O | O(log n)Hash | O(m)Ex |  |
+ 

Table 2: Comparison of asymptotic performance, where Hash is 160 bits, Sig is 1024 bits, AggSig = 160 bits, Tag = 792 bits, Bitmap is a small constant, Ex denotes modular exponentiation, Mu denotes modular multiplication, Pairing denotes a pairing operation, k is the number of attributes in a projection query, attribute is an attribute value in R, R* denotes the unmatched tuples in R, and we assume |R| = |P| = n in join queries. Note that our solution supports aggregate queries and more flexible join queries, and we do not count the basic search operation in the comparison. + +Figure 2: Comparison of storage overhead + +Hash plus $O(n)$Ex, $O(n)$Ex, and $O(n)$Ex at the querier side in order to aggregate HLT tags. However, our solution still outperforms [23], which incurs respective computational complexity $O(n)$Ex, $O(kn)$Ex, and $O(n)$Ex on bilinear group. + +Regarding projection queries and join queries, our solution requires $O(n+m)$ tags. In contrast, [13] requires $O((m-k)n)$ attribute values for projection queries, and $O(n \log n)$ hash values plus those unmatched tuples in R for join queries. Although [23] only requires $O(n)$ certified bitmap (recording updated tuples in one update period) for projection queries, it requires at least $O(n)$ certified bitmap plus those unmatched tuples in R for join queries. This is because [13, 23] have to fetch at least one table (either R or P) for join queries. + +Regarding aggregate queries, the computational and communication complexities are the same regardless of the number of attributes the querier wants to aggregate. For example, our solution incurs the same complexity for queries such as: "select SUM($A_1$), ..., SUM($A_k$) from R where $A_1 > a$ and $A_1 < b$" and "select SUM($A_1$) from R where $A_1 > a$ and $A_1 < b$". + +## 6. 
INTEGRATE QUERY INTEGRITY AND STORAGE INTEGRITY + +The accompanying concept to query integrity is *storage integrity*, namely the assurance that the outsourced data is kept intact in the cloud. Elegant solutions to storage integrity include Provable Data Possession (PDP) [2] and Proof of Retrievability (POR) [12]. In particular, PDP can achieve *constant* computational and communication complexities in the static setting [2], and *logarithmic* computational and communication complexities in the dynamic setting [8]. + +A systematic solution should assure both query integrity and storage integrity. Intuitively, query integrity is more demanding than storage integrity because storage integrity does not have to deal with the structure of the database. However, one cannot simply adapt PDP/POR techniques to the setting of outsourced databases because they deal with unstructured data. In what follows, we sketch a solution that integrates PDP-flavor storage integrity with respect to the *logical* structure of the outsourced database (rather than the physical structure of the database). The solution is not optimal because it incurs communication complexity of $O(n)$ tags and computational complexity of $O(n)$ exponentiations. We defer a detailed analysis of the following solution to an expanded version of this paper. + +Specifically, consider a table R with schema ($A_1, ..., A_m$), where $A_i$ is the primary key (ID) that uniquely identifies a tuple. To ensure storage integrity, the database storage integrity auditor (e.g., the data owner or a third party) can perform the procedure shown in Figure 3. 
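A toy version of this kind of PDP-style spot-checking audit can be sketched as follows; a deliberately simplified linear tag stands in for the HLT (the real tags are cryptographic), and the point is only to show the linearity that makes the aggregate check and the escape-probability analysis work. All names and parameters are illustrative assumptions.

```python
import random

def make_tags(values, a, b, p):
    """Toy linear tag sigma_i = (a*m_i + b) mod p for tuple value m_i.
    What matters is only that tags are linear in the tagged values,
    so they aggregate under random coefficients like an HLT."""
    return [(a * m + b) % p for m in values]

def audit(values, tags, a, b, p, t, rng):
    """Spot-check t random tuples: the server aggregates the sampled
    values and tags with random coefficients; the auditor checks the
    linear relation on the two aggregates."""
    idx = rng.sample(range(len(values)), t)
    coef = [rng.randrange(1, p) for _ in idx]
    r = sum(c * values[i] for c, i in zip(coef, idx)) % p      # aggregate tuple
    sigma = sum(c * tags[i] for c, i in zip(coef, idx)) % p    # aggregate tag
    return sigma == (a * r + b * sum(coef)) % p

def escape_probability(f, t):
    """A corrupted fraction f of tuples passes an audit of t random
    tuples with probability about (1 - f) ** t."""
    return (1.0 - f) ** t

rng = random.Random(7)
p = 2**31 - 1                     # a prime modulus (illustrative)
a_key, b_key = 12345, 678         # toy secret key, not a real HLT key
data = list(range(100))
tags = make_tags(data, a_key, b_key, p)
honest_ok = audit(data, tags, a_key, b_key, p, t=10, rng=rng)
tampered = data[:]
tampered[0] += 1                  # corrupt one tuple, keep the old tag
caught = not audit(tampered, tags, a_key, b_key, p, t=100, rng=rng)
```

The honest server always passes, any sampled corrupted tuple is caught because the linear relation breaks, and auditing a few hundred tuples already detects a one-percent corruption rate with high probability.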
Since the auditor has no knowledge about R, it fetches \ No newline at end of file diff --git a/samples/texts/7449877/page_11.md b/samples/texts/7449877/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..d52dd74a9b1c6873481b5e2e5fd52b2123115f86 --- /dev/null +++ b/samples/texts/7449877/page_11.md @@ -0,0 +1,51 @@ +Figure 3: Procedures to ensure storage integrity + +and verifies the integrity of all primary keys (IDs) in $\mathbb{R}$ and their tags, as shown in Steps 1 and 2. Then the auditor randomly selects $t$ primary IDs from the set of IDs, together with $t$ coefficients, denoted by $(r_{b_1}.A_i, c_1), \dots, (r_{b_t}.A_i, c_t)$, and asks the server to compute an aggregate tuple $r = \sum_{i=1}^{t} c_i r_{b_i}$. Let $\sigma$ be the aggregated tag of $\sigma_{b_1}, \dots, \sigma_{b_t}$ with coefficients $c_1, \dots, c_t$; the auditor can then verify storage integrity by running $\Lambda_{HLT}.Vrfy(\Lambda_{HLT}.pk, r, \sigma)$. If the output is 1, storage integrity is assured except with probability $(1-f)^t$, where $f$ is the fraction of corrupted tuples. + +## 7. CONCLUSION + +We presented an efficient solution to the problem of query integrity in the setting of outsourced dynamic databases. Query integrity allows a querier, the data owner or a third party, to verify that its queries were faithfully executed by the cloud server. Compared with the state-of-the-art solutions, our solution is: (i) more powerful, by supporting aggregate queries in addition to selection, projection, and join queries, and (ii) more efficient, by eliminating a logarithmic (or even linear) multiplication factor from the overall cost (depending on the type of the queries). + +Our solution still incurs linear complexity. A notable direction for future research is to address the following open problem: Can we attain query integrity with logarithmic (or constant) complexity as in the case of assuring storage integrity? 
+ +### Acknowledgement + +Qingji Zheng and Shouhuai Xu were supported in part by an AFOSR MURI grant and a NSF grant. Giuseppe Ateniese was supported in part by a Google Research Award and an IBM Faculty Award. + +## 8. REFERENCES + +[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. From secrecy to soundness: Efficient verification via secure computation. In S. Abramsky, C. Gavoille, C. Kirchner, F. Meyer auf der Heide, and P. Spirakis, editors, *Automata, Languages and Programming*, volume 6198 of Lecture Notes in Computer Science, pages 152–163. Springer Berlin / Heidelberg, 2010. 10.1007/978-3-642-14165-2_14. + +[2] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song. Provable data possession at untrusted stores. In *Proceedings of the 14th ACM conference on Computer and communications security*, CCS '07, pages 598–609, New York, NY, USA, 2007. ACM. + +[3] G. Ateniese, S. Kamara, and J. Katz. Proofs of storage from homomorphic identification protocols. In *Proceedings of the 15th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology*, ASIACRYPT '09, pages 319–333, Berlin, Heidelberg, 2009. Springer-Verlag. + +[4] M. Bellare and G. Neven. Multi-signatures in the plain public-key model and a general forking lemma. In *ACM Conference on Computer and Communications Security*, pages 390–399, 2006. + +[5] D. Boneh, C. Gentry, B. Lynn, and H. Shacham. Aggregate and verifiably encrypted signatures from bilinear maps. In *Proceedings of the 22nd international conference on Theory and applications of cryptographic techniques*, EUROCRYPT'03, pages 416–432, Berlin, Heidelberg, 2003. Springer-Verlag. + +[6] K.-M. Chung, Y. Kalai, and S. Vadhan. Improved delegation of computation using fully homomorphic encryption. In *Proceedings of the 30th annual conference on Advances in cryptology*, CRYPTO'10, pages 483–501, Berlin, Heidelberg, 2010. Springer-Verlag. + +[7] P. Devanbu, M. 
Gertz, C. Martel, and S. G. Stubblebine. Authentic data publication over the internet. *J. Comput. Secur.*, 11(3):291–314, Apr. 2003. + +[8] C. Erway, A. Küpçü, C. Papamanthou, and R. Tamassia. Dynamic provable data possession. In *Proceedings of the 16th ACM conference on Computer and communications security*, CCS '09, pages 213–222, New York, NY, USA, 2009. ACM. + +[9] A. Fiat. Batch rsa. In *Proceedings on Advances in cryptology*, CRYPTO '89, pages 175–185, New York, NY, USA, 1989. Springer-Verlag New York, Inc. + +[10] R. Gennaro, C. Gentry, and B. Parno. Non-interactive verifiable computing: outsourcing computation to untrusted workers. In *Proceedings of the 30th annual conference on Advances in cryptology*, CRYPTO'10, pages 465–482, Berlin, Heidelberg, 2010. Springer-Verlag. + +[11] M. Goodrich, R. Tamassia, and N. Triandopoulos. Super-efficient verification of dynamic outsourced databases. In T. Malkin, editor, *Topics in Cryptology IC CT-RSA 2008*, volume 4964 of Lecture Notes in Computer Science, pages 407–424. Springer Berlin / Heidelberg, 2008. + +[12] A. Juels and B. S. Kaliski, Jr.: Pors: proofs of retrievability for large files. In *Proceedings of the 14th ACM conference on Computer and communications security*, CCS '07, pages 584–597, New York, NY, USA, 2007. ACM. + +[13] F. Li, M. Hadjieleftheriou, G. Kollios, and L. Reyzin. Dynamic authenticated index structures for outsourced databases. In *Proceedings of the 2006 ACM SIGMOD international conference on Management of data*, SIGMOD '06, pages 121–132, New York, NY, USA, 2006. ACM. + +[14] F. Li, M. Hadjieleftheriou, G. Kollios, and L. Reyzin. Authenticated index structures for aggregation queries. *ACM Trans. Inf. Syst. Secur.*, 13(4):32:1–32:35, Dec. 2010. + +[15] R. C. Merkle. A certified digital signature. In *Proceedings on Advances in cryptology*, CRYPTO '89, pages 218–238, New York, NY, USA, 1989. Springer-Verlag New York, Inc. + +[16] K. Mouratidis, D. Sacharidis, and H. Pang. 
Partially materialized digest scheme: an efficient verification method for outsourced databases. *The VLDB Journal*, 18(1):363–381, Jan. 2009. + +[17] E. Mykletun, M. Narasimha, and G. Tsudik. Providing authentication and integrity in outsourced databases using merkley hash trees. In *UCI-SCONCE Technical Report*, 2003. + +[18] E. Mykletun, M. Narasimha, and G. Tsudik. Authentication and integrity in outsourced databases. *Trans. Storage*, 2(2):107–138, May 2006. \ No newline at end of file diff --git a/samples/texts/7449877/page_12.md b/samples/texts/7449877/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..ffa88b5b4627f976074afe55fe45608ebb26cc17 --- /dev/null +++ b/samples/texts/7449877/page_12.md @@ -0,0 +1,26 @@ +[19] M. Narasimha and G. Tsudik. Authentication of outsourced databases using signature aggregation and chaining. In *Proceedings of the 11th international conference on Database Systems for Advanced Applications*, DASFAA'06, pages 420–436, Berlin, Heidelberg, 2006. Springer-Verlag. + +[20] G. Nuckolls. Verified query results from hybrid authentication trees. In *Proceedings of the 19th annual IFIP WG 11.3 working conference on Data and Applications Security*, DBSec'05, pages 84–98, Berlin, Heidelberg, 2005. Springer-Verlag. + +[21] B. Palazzi, M. Pizzonia, and S. Pucacco. Query racing: fast completeness certification of query results. In *Proceedings of the 24th annual IFIP WG 11.3 working conference on Data and applications security and privacy*, DBSec'10, pages 177–192, Berlin, Heidelberg, 2010. Springer-Verlag. + +[22] H. Pang, A. Jain, K. Ramamritham, and K.-L. Tan. Verifying completeness of relational query results in data publishing. In *Proceedings of the 2005 ACM SIGMOD international conference on Management of data*, SIGMOD '05, pages 407–418, New York, NY, USA, 2005. ACM. + +[23] H. Pang, J. Zhang, and K. Mouratidis. Scalable verification for outsourced dynamic databases. *Proc. VLDB Endow.*, 2(1):802–813, Aug. 
2009. + +[24] C. Papamanthou, R. Tamassia, and N. Triandopoulos. Authenticated hash tables. In *Proceedings of the 15th ACM conference on Computer and communications security*, CCS '08, pages 437–448, New York, NY, USA, 2008. ACM. + +[25] C. Papamanthou, R. Tamassia, and N. Triandopoulos. Optimal verification of operations on dynamic sets. In *Proceedings of the 31st annual conference on Advances in cryptology*, CRYPTO'11, pages 91–110, Berlin, Heidelberg, 2011. Springer-Verlag. + +[26] H. Shacham and B. Waters. Compact proofs of retrievability. In *Proceedings of the 14th International Conference on the Theory and Application of Cryptology and Information Security: Advances in Cryptology*, ASIACRYPT '08, pages 90–107, Berlin, Heidelberg, 2008. Springer-Verlag. + +[27] R. Tamassia and N. Triandopoulos. Certification and authentication of data structures. In *AMW*, 2010. + +[28] M. Xie, H. Wang, J. Yin, and X. Meng. Integrity auditing of outsourced data. In *Proceedings of the 33rd international conference on Very large data bases*, VLDB '07, pages 782–793. VLDB Endowment, 2007. + +[29] J. XU and E.-C. CHANG. Authenticating aggregate range queries over multidimensional dataset. Cryptology ePrint Archive, Report 2010/050, 2010. +http://eprint.iacr.org/. + +[30] Y. Yang, D. Papadias, S. Papadopoulos, and P. Kalnis. Authenticated join processing in outsourced databases. In *Proceedings of the 35th SIGMOD international conference on Management of data*, SIGMOD '09, pages 5–18, New York, NY, USA, 2009. ACM. + +[31] Y. Yang, S. Papadopoulos, D. Papadias, and G. Kollios. Spatial outsourcing for location-based services. In *Proceedings of the 2008 IEEE 24th International Conference on Data Engineering*, ICDE '08, pages 1082–1091, Washington, DC, USA, 2008. IEEE Computer Society. 
\ No newline at end of file diff --git a/samples/texts/7449877/page_2.md b/samples/texts/7449877/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..e67724fc0bf20d41bbdf4d4b76026714046b00aa --- /dev/null +++ b/samples/texts/7449877/page_2.md @@ -0,0 +1,39 @@ +of [23], where $k \le m$ is the number of attributes involved in the projection operation. + +Our solution incurs an $O(n+m)$ communication complexity, which is the same as in [23] but much more efficient than the $O((m-k)n)$ of [13], where $k \le m$ is the number of attributes involved in the projection operation. + +* For join queries with respect to two tables of *n* tuples and *m* attributes, our solution incurs $O(n)$ modular exponentiations at the querier side, which is not as efficient as the $O(n \log n)$ hash operations of [13], but more efficient than the $O(n)$ exponentiation operations on bilinear map of [23]. + +Our solution incurs a communication complexity of $O(n+m)$ tags, which is more efficient than the $O(n \log n)$ hash values of [13] and comparable to the $O(n)$ of [23]. + +The efficiency of our solution mainly comes from the second building-block mentioned above, which is weaker than the Homomorphic Linear Authenticator introduced in [2] and may be of independent value. + +## 1.2 Related Work + +The problem of assuring query integrity in the context of outsourced data is fundamentally related to the concept of certified data structures [27], which presents some results that are conceptually important but not efficient. The state-of-the-art solutions to query integrity are due to [13, 23], which are the only solutions that support selection, projection and join queries simultaneously. These two solutions follow two respective approaches to the query integrity problem. + +* The tree-based approach: Basically, this approach uses the Merkle hash tree [15] or its variants to index search keys [11, 17, 13, 7, 16, 31, 20, 21]. 
As a result, this approach leads to logarithmic complexity in terms of both communication and verification, possibly with some further tricks (e.g., using the Merkle hash tree to maintain signatures at multiple hash tree levels [11]). The best solution in this approach is due to [13], which uses the Merkle B-tree and the Embedded Merkle B-tree in order to reduce I/O operations. + +* The signature-based approach: Basically, this approach uses the signature aggregation technique [5, 18] to aggregate the validity of query answers [18, 19, 23, 22]. As a result, this approach can lead to low (even constant) communication complexity, but may require special treatment for handling more powerful (e.g., projection) queries and often leads to large storage and computational complexities. The best solution in this approach is due to [23], which uses aggregate signatures to sign each attribute and returns a single signature as the validity proof for projection queries. This solution uses a chaining signing technique to build the index for the search key so as to facilitate range queries, and publishes a certified bitmap corresponding to every update so as to facilitate dynamic updates. These cause a large storage and communication overhead while including many exponentiations and pairing operations. + +There are studies that are somewhat related to the theme of the present paper as well. These include: authenticating the answers to set operations using accumulator [25], authenticating the answers to aggregate queries using authenticated prefix-sums trees [14], authenticating the answers to join queries [30], authenticating count queries with respect to multi-dimensional data while preserving + +privacy [29], and assuring probabilistic integrity in selection and join operations [28]. Query integrity is also somewhat related to outsourced verifiable computation [1, 6, 10]. + +**Paper outline.** + +The rest of the paper is organized as follows. 
Section 2 presents the functional and security definitions of outsourced dynamic database with the requirement of query integrity. Section 3 describes the first building-block, and Section 4 describes the second building-block. Section 5 presents the main construction of authenticated outsourced dynamic database and analyzes its security and efficiency. Section 6 presents an extension of the construction to accommodate storage integrity of outsourced dynamic database. Section 7 concludes the paper with future research directions. + +# 2. QUERY INTEGRITY FOR OUTSOURCED DYNAMIC DATABASES: DEFINITIONS + +In the context of the present paper, a relational database consists of multiple tables, and each table has multiple tuples and multiple attributes. As shown in Figure 1, an outsourced database system has three participants: data owner (who outsources its database to the cloud), database server (i.e., the cloud), and database queriers (e.g., business partners of the data owner). The data owner uses a management interface to outsource its database to the cloud, including dynamic updates of the database. There is also a query interface, which can be used by any third party, including the data owner itself if desired. + +Figure 1: Outsourced dynamic database system model. + +Intuitively, query integrity means that any query *qry* is faithfully executed with respect to the database *D*. If we treat a query *qry* as a function, the querier should be able to verify that the answer to its query is indeed *qry(D)*. The concern is legitimate because the cloud may execute the query *qry* with respect to *D'*, where *D' ≠ D* because (for example) the cloud vendor may use an outdated version of *D* rather than the up-to-date one, or *D' ⊂ D* because the cloud vendor wants to spend less resources on searching the entire *D*. Moreover, the cloud may return the answer to a modified query *qry'* on database *D* or even some *D' ≠ D*. 
As a concrete example, a query *qry* asks for the tuples with some attribute values that belong to the interval [10, 100], but the cloud actually returns the tuples whose attribute values belong to the smaller interval [10, 20]. Without assuring query integrity, the querier cannot tell whether the returned answer is indeed *qry(D)* or some *qry'(D')*. + +In what follows, we present the functional and security definitions of Authenticated Outsourced Dynamic Database (AuthDDB), which were somewhat inspired by the definitions of Authenticated
By taking as input the private key $sk$ and the current state information State, the data owner interacts with the server, which takes as input the stored data $D$ and the cryptographic auxiliary information Au. The data owner $O$ updates its state information to State' from the update information Upd, and the server obtains Au' and $D'$ by updating the stored database accordingly. We denote the protocol by + +$$ (\text{Au}', \text{State}', D') \leftarrow (\text{O}(sk, \text{State}, \text{Upd}) \leftrightarrow \text{S}(\text{Au}, D)) $$ + +* **QueryVrfy:** This is a protocol between a querier $Q$, which issues a SQL query `qry`, and the server $S$, which answers the query with the result `Rst` and a proof `Prf`. The querier verifies the result `Rst` with `Prf`, and outputs reject if `Rst` is not valid with respect to the query `qry` and the state `State`; otherwise, the querier accepts `Rst` and `Prf`. We denote the protocol by + +$$ \{( reject ), ( accept, Rst, Prf )\} \leftarrow ( Q(pk, qry, State) \leftrightarrow S(Au, D) ) $$ + +We require an AuthDDB scheme to be correct, meaning that for any honest server, $(sk, pk) \leftarrow \text{KeyGen}(1^\ell)$, $(\text{State}, \text{Au}, D) \leftarrow \text{Setup}(sk, D)$, polynomial-many executions of the Update protocol, and a query `qry`, it holds that + +$$ (\text{accept}, \text{Rst}, \text{Prf}) \leftarrow (\mathcal{Q}(pk, qry, \text{State}) \leftrightarrow \mathcal{S}(\text{Au}, D)) $$ + +We require an AuthDDB scheme to be sound, meaning that no malicious server can return incorrect query answers without being detected by the querier. Specifically, we say an AuthDDB scheme is sound if for any query `qry` on database `D`, the server can not return an incorrect `Rst` such that + +$$ (\text{accept}, \text{Rst}, \text{Prf}) \leftarrow (\mathcal{Q}(pk, qry, \text{State}) \leftrightarrow \mathcal{S}(\text{Au}, D)). 
$$ + +Formally, + +**Definition 2.** (soundness of AuthDDB) Let $\Lambda = (\text{KeyGen}, \text{Setup}, \text{Update}, \text{QueryVrfy})$ be an AuthDDB scheme and $\mathcal{A}$ be a probabilistic polynomial-time adversary. Consider the following security game between a challenger and $\mathcal{A}$. + +* The challenger runs $(sk, pk) \leftarrow \text{KeyGen}(1^\ell)$ and gives $pk$ to the adversary $\mathcal{A}$. + +* $\mathcal{A}$ makes an oracle query to Setup by presenting a database $D_0$. The challenger computes + +$$ (\text{State}_0, \text{Au}_0, D_0) \leftarrow \text{Setup}(sk, D_0), $$ + +and gives $\text{State}_0, \text{Au}_0$ to $\mathcal{A}$. The challenger makes $\text{State}_0$ public. + +* $\mathcal{A}$ adaptively asks to update $D_0$ with $\text{Upd}_i$, $i \ge 0$. The challenger computes + +$$ ( \text{Au}_{i+1}, \text{State}_{i+1}, D_{i+1} ) \leftarrow ( \mathcal{O}(sk, \text{State}_i, \text{Upd}_i, \text{Au}_i, D_i) \leftrightarrow \mathcal{S}(\text{Au}_i, D_i) ). $$ + +* $\mathcal{A}$ may execute QueryVrfy polynomially many times. Eventually, $\mathcal{A}$ outputs a query `qry` and a query result `Rst` with proof `Prf`. + +* $\mathcal{A}$ wins the game if + +$$ (\text{accept}, \text{Rst}, \text{Prf}) \leftarrow ( \mathcal{Q}(pk, qry, \text{State}_k) \leftrightarrow \mathcal{S}(\text{Au}_k, D_k) ) $$ + +for some $k \ge 0$ and $\text{Rst} \neq \text{localRst}$, where $\text{localRst} \leftarrow \text{LocalQuery}(\text{qry}, D_k)$ is produced by the challenger that faithfully executes query `qry` on database $D_k$. + +We say that $\Lambda$ is sound if any polynomial-time algorithm $\mathcal{A}$ can win the game with at most a negligible probability. + +# 3. BUILDING-BLOCK I: AUTHENTICATED OUTSOURCED ORDERED DATA SET (AUTHODS) + +In this section, we introduce a building block for assuring range query integrity on an ordered data set that is outsourced to the server.
This building-block is called Authenticated Outsourced Ordered Data Set (AuthODS), which is similar to AuthDDB. + +## 3.1 Definition of AuthODS + +**Definition 3.** (AuthODS) Let $E$ be an ordered data set. An AuthODS scheme consists of the following algorithms, which are similar to those in Definition 1: + +* **KeyGen:** This key generation algorithm generates the public/private key as KeyGen in Definition 1. + +* **Setup:** This setup algorithm is the same as Setup in Definition 1, except that the database is replaced with an ordered set $E$. + +* **Update:** This update protocol is the same as Update in Definition 1, except that the update operations are element insertion/deletion/update on the ordered data set $E$. + +* **QueryVrfy:** This query protocol is the same as QueryVrfy in Definition 1, except that it only supports range queries `qry`$(a, b)$ that ask for all elements in the interval $[a, b]$. + +The correctness of AuthODS can be defined similarly to that of the AuthDDB scheme. \ No newline at end of file diff --git a/samples/texts/7449877/page_4.md b/samples/texts/7449877/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..607de74cb792bd77d5f9282a6ed3f1520d73f9c2 --- /dev/null +++ b/samples/texts/7449877/page_4.md @@ -0,0 +1,40 @@ +*Definition 4.* (soundness of AuthODS) For an AuthODS scheme, $\Lambda = (\text{KeyGen}, \text{Setup}, \text{Update}, \text{QueryVrfy})$, we consider the security game as in Definition 2, except that (i) the initial database is replaced with an ordered set $E$, (ii) the update operation is element insertion, deletion or update on the ordered data set, and (iii) the queries are only range queries $qry(a, b)$ that ask for elements in the interval $[a, b]$. We say that $\Lambda$ is sound if any polynomial-time algorithm $\mathcal{A}$ can win the game with at most a negligible probability.
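Section 3.2 instantiates AuthODS with a Merkle B-tree. As background for how its proofs are checked, the following minimal sketch (not the paper's implementation: a plain binary Merkle tree over SHA-256 rather than a large-fan-out B-tree, with illustrative function names) shows how a leaf-to-root authentication path is recomputed and compared against the published root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of levels: level[0] = hashed leaves, last level = [root]."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last node on odd-sized levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Collect the sibling hashes from leaf `index` up to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        path.append((level[sib], sib < index))  # (sibling hash, sibling-is-left?)
        index //= 2
    return path

def verify_path(leaf, path, root):
    """Rehash from the leaf to the top and compare with the signed/published root."""
    node = h(leaf)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

leaves = [b"E0", b"E1", b"E2", b"E3", b"E4"]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify_path(b"E2", auth_path(levels, 2), root)        # genuine element verifies
assert not verify_path(b"E2'", auth_path(levels, 2), root)   # tampered element fails
```

In the scheme of Section 3.2 the root is additionally signed by the data owner (so State binds the whole tree), and the B-tree's large fan-out reduces I/O; the recompute-and-compare step sketched here is the same.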
+ +## 3.2 Construction and Analysis of AuthODS: Merkle B-Tree + +Now we describe an AuthODS scheme based on the Merkle B-tree (MB-tree), which has been extensively studied in [13, 17]. The Merkle B-tree applies the basic idea of the Merkle tree to a $B^+$ tree structure, where the operations on the Merkle B-tree (e.g., insertion and deletion) are similar to those on a $B^+$ tree. The primary advantage of the $B^+$ tree is that it has a large fan-out, which can reduce the number of I/O operations when searching for an element [13]. Let $\text{Sig} = (\text{KeyGen}, \text{Sign}, \text{Verify})$ be a secure signature scheme. Let $E$ be an ordered set. The Merkle B-tree scheme consists of the following algorithms: + +* $(sk, pk) \leftarrow \text{KeyGen}(1^\ell)$: This algorithm runs $\text{Sig.KeyGen}(1^\ell)$ to obtain a pair of private and public keys $(sk, pk)$. + +* (State, Au) $\leftarrow \text{Setup}(sk, E)$: This algorithm outputs a succinct signature which can be used for verification. The structure of the Merkle B-tree $\mathcal{T}$ is similar to a $B^+$ tree, where the leaves store the elements of the ordered set $E$, and the values of internal nodes are computed from the concatenation of the values of their children via an appropriate hash function. The root of the tree will be signed to produce the state information, denoted by $\text{State} = \text{Sig.Sign}(\mathcal{T})$ and $\text{Au} = \mathcal{T}$. + +* **Update:** The update protocol fulfills update operations. For simplicity, we consider the example of the replacement operation while assuming that the replacement preserves the order of the elements. We refer to [13] for details about the insertion and deletion operations. Suppose Upd = "update the element $E_i$ to $E'_i$". Upon receiving Upd from the data owner, the server updates $E$ to $E'$ by replacing $E_i$ with $E'_i$, and updates $\mathcal{T}$ to $\mathcal{T}'$.
The server provides a proof, a path of $E_i$ in $\mathcal{T}$, namely a sequence including the values of the nodes from $E_i$ to the root of the MB-tree as well as the values of these nodes' siblings. The data owner can hash the path of $E_i$ from the bottom to the top and verify whether the root is valid with respect to the state State. If so, the data owner updates the path from the bottom to the top by replacing $E_i$ with $E'_i$, which results in a new root, signs the new root, and sets $\text{State}' = \text{Sig.Sign}(\mathcal{T}')$; otherwise, the data owner aborts. + +* **QueryVrfy:** Given a range query $qry(a, b)$, the server outputs a proof Prf showing that Rst contains all elements in $[a, b]$. + + - If Rst is empty, then there exists some $s$ such that $E_s < a$ and $b < E_{s+1}$. The server returns the proof Prf including two paths: a path of $E_s$ and a path of $E_{s+1}$. The querier hashes each path from the bottom to the top, and verifies that the roots match the state State and that $E_s$ and $E_{s+1}$ are adjacent leaves. If so, the querier returns the empty Rst, Prf, and accept. Otherwise, it aborts. + - If Rst is not null, suppose the query result is $(E_s, \dots, E_t), s \le t$. The server returns the proof Prf including two paths: one path of the left neighbor leaf of $E_s$, and the other path of the right neighbor leaf of $E_t$. Then the querier uses Prf and the result Rst to construct a $B^+$ tree, and verifies whether the root of this $B^+$ tree is valid for State = $\text{Sig.Sign}(\mathcal{T})$. If so, the querier returns (Rst, Prf, accept); otherwise, the querier aborts. + +**THEOREM 1.** *Assuming that Sig is a secure signature scheme and the hash function is collision resistant, the Merkle B-tree scheme is sound with respect to Definition 4.* + +# 4. BUILDING-BLOCK II: HOMOMORPHIC LINEAR TAG (HLT) + +Now we present the second building block, HLT.
Intuitively, HLT offers the following property: If messages $M_1, \dots, M_n$ are respectively tagged with $\sigma_1, \dots, \sigma_n$ using some cryptographic function, then for coefficients $c_1, \dots, c_n$ in a pre-defined coefficient space, the aggregate message $M = \sum_{i=1}^n c_i M_i$ can be verified via the aggregate tag $\sigma$ of $\sigma_1, \dots, \sigma_n$ and the coefficients $c_1, \dots, c_n$. HLT can be divided into two types: + +* Publicly verifiable HLT: It allows anyone (without knowing any secret) to verify the validity of tags. In order to allow any third party to verify query integrity, this type of HLT is needed for the purpose of the present paper. + +* Privately verifiable HLT: It allows someone who knows the relevant secret to verify the validity of tags. Putting this into the context of the present paper, this type of HLT can be used to allow the data owner (but not third parties) to verify query integrity. Therefore, this type of HLT will not be discussed further in the paper. + +The concept of HLT was inspired by the notion of Homomorphic Linear Authenticator (HLA), which was formally introduced in [3]. The difference between them is that HLT is weaker than HLA because HLT only considers attacks that do not attempt to tamper with the individual tags (tag tampering is dealt with by another layer of protection for free, namely by the first building-block); whereas HLA explicitly accommodates attacks that aim to tamper with the individual tags. This makes it possible to construct HLT schemes that are more efficient than their HLA counterparts. It is worthwhile to point out the following feature of HLT and HLA: the aggregated message $M$ and the aggregated tag $\sigma$ are sufficient to allow the verifier to test their validity *without* knowing the individual messages $M_1, \dots, M_n$. This is not the case for aggregate signatures [5], batch RSA [9], and condensed RSA [18], which are not sufficient for the purpose of HLT or HLA.
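The aggregate-then-verify workflow becomes concrete in the discrete-log construction of Section 4.2. The following toy Python sketch implements that scheme's KeyGen/TagGen/HLTAgg/Vrfy equations with tiny, insecure parameters ($p = 2039$, $q = 1019$) chosen purely for illustration, and checks that an honestly aggregated message verifies against the aggregated tag:

```python
import random

random.seed(1)                         # deterministic demo run

# Toy parameters for illustration only (the paper uses ~140-bit q and ~512-bit p).
p, q = 2039, 1019                      # primes with q | (p - 1)
v1, v2 = pow(2, 2, p), pow(3, 2, p)    # squares in Z_p^* have order q here
m = 4                                  # number of attributes per message

def keygen():
    """sk = {(s_j1, s_j2)}, pk = {z_j = v1^{-s_j1} * v2^{-s_j2} mod p}."""
    sk = [(random.randrange(1, q), random.randrange(1, q)) for _ in range(m)]
    pk = [pow(v1, -s1, p) * pow(v2, -s2, p) % p for s1, s2 in sk]
    return sk, pk

def tag_gen(sk, M):
    r1, r2 = random.randrange(1, q), random.randrange(1, q)
    x = pow(v1, r1, p) * pow(v2, r2, p) % p
    y1 = (r1 + sum(Mj * s1 for Mj, (s1, _) in zip(M, sk))) % q
    y2 = (r2 + sum(Mj * s2 for Mj, (_, s2) in zip(M, sk))) % q
    return (x, y1, y2)

def hlt_agg(c, tags):
    x = 1
    for ci, (xi, _, _) in zip(c, tags):
        x = x * pow(xi, ci, p) % p
    y1 = sum(ci * t[1] for ci, t in zip(c, tags)) % q
    y2 = sum(ci * t[2] for ci, t in zip(c, tags)) % q
    return (x, y1, y2)

def vrfy(pk, M, sigma):
    """Check x == v1^y1 * v2^y2 * prod_j z_j^{M[j]} mod p."""
    x, y1, y2 = sigma
    rhs = pow(v1, y1, p) * pow(v2, y2, p) % p
    for zj, Mj in zip(pk, M):
        rhs = rhs * pow(zj, Mj, p) % p
    return x == rhs

sk, pk = keygen()
msgs = [[random.randrange(q) for _ in range(m)] for _ in range(3)]
tags = [tag_gen(sk, M) for M in msgs]
c = [random.randrange(q) for _ in range(3)]                 # random coefficients
agg_M = [sum(ci * M[j] for ci, M in zip(c, msgs)) for j in range(m)]
assert vrfy(pk, agg_M, hlt_agg(c, tags))                    # aggregate verifies
```

Note that only the aggregated message and tag reach `vrfy`, matching the feature highlighted above: the verifier never needs the individual $M_i$.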
+ +### 4.1 Definitions of HLT + +*Definition 5.* (publicly verifiable HLT) A publicly verifiable HLT scheme consists of the following algorithms: + +* $(pk, sk) \leftarrow \text{KeyGen}(1^\ell)$: This algorithm takes as input a security parameter $\ell$, and outputs a pair of public and private keys $(pk, sk)$. It may optionally specify a coefficient domain $\mathcal{C}$ and a message space $\mathcal{M}$. + +* $\sigma_i \leftarrow \text{TagGen}(sk, M_i)$: This algorithm takes as input the private key sk and a message $M_i \in \mathcal{M}$, and outputs a tag $\sigma_i$ for $M_i$. + +* $\sigma \leftarrow \text{HLTAgg}(\vec{c}, \overline{\text{Tag}})$: This linear aggregation algorithm takes as input a vector of tags $\overline{\text{Tag}} = (\sigma_1, \dots, \sigma_n)$ with respect to a vector of messages $\vec{M} = (M_1, \dots, M_n)$ and a \ No newline at end of file diff --git a/samples/texts/7449877/page_5.md b/samples/texts/7449877/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..aaa87e58786effd48b8aaf312a2fa58c85afaa1d --- /dev/null +++ b/samples/texts/7449877/page_5.md @@ -0,0 +1,76 @@ +vector of coefficients $\vec{c} = (c_1, \dots, c_n)$. It outputs an aggregate tag $\sigma$ with respect to the aggregated message $M = \sum_{i=1}^n c_i M_i$. + +* $\{\text{0}, \text{1}\} \leftarrow \text{Vrfy}(pk, M', \sigma')$: This deterministic algorithm takes as input the public key $pk$, a candidate message $M'$, and a tag $\sigma'$. It outputs 1 if $\sigma'$ is valid with respect to $M'$, and outputs 0 otherwise. + +We require a HLT scheme to be correct, meaning that any faithfully aggregated message $M$ and tag $\sigma$ are always accepted as valid. 
Formally, this means that for $(pk, sk) \leftarrow \text{KeyGen}(1^\ell)$, $\tilde{M} = (M_1, \dots, M_n) \in \mathcal{M}^n$, $\text{Tag} = (\sigma_1, \dots, \sigma_n)$ where $\sigma_i \leftarrow \text{TagGen}(sk, M_i)$ for $1 \le i \le n$, and $\vec{c} = (c_1, \dots, c_n) \in \mathcal{C}^n$, then $\sigma \leftarrow \text{HLTAgg}(\vec{c}, \text{Tag})$ implies $1 \leftarrow \text{Vrfy}(pk, \sum_{i=1}^n c_i M_i, \sigma)$. + +The intuition behind the following security definition of HLT is: for any tag $\sigma$ generated for message $M$, there is no probabilistic polynomial time adversary that can present $M' \neq M$ such that $1 \leftarrow \text{Vrfy}(pk, M', \sigma)$. Formally, we have: + +**Definition 6.** (security of HLT) Let $\Lambda = (\text{KeyGen}, \text{TagGen}, \text{HLTAgg}, \text{Vrfy})$ be a HLT and $\mathcal{A}$ be a probabilistic polynomial-time adversary. Consider the following security game between a challenger and $\mathcal{A}$: + +1. The challenger runs $(pk, sk) \leftarrow \text{KeyGen}(1^\ell)$ and gives $pk$ to $\mathcal{A}$. The optional coefficient domain $C$ and the message space $\mathcal{M}$ are specified by KeyGen. + +2. $\mathcal{A}$ may make oracle queries to TagGen by adaptively selecting $M_1, \dots, M_n$ from $\mathcal{M}$. The challenger computes $\sigma_i \leftarrow \text{TagGen}(sk, M_i)$ for $1 \le i \le n$ and returns tags $(\sigma_1, \dots, \sigma_n)$ to $\mathcal{A}$. The challenger keeps the lists of messages and tags: $(M_1, \dots, M_n)$ and $(\sigma_1, \dots, \sigma_n)$. + +3. $\mathcal{A}$ may make oracle queries to HLTAgg by selecting a vector of coefficients $\vec{c} = (c_1, \dots, c_n)$, obtain the aggregate tag $\sigma$, and run Vrfy with the aggregate tag $\sigma$ and the aggregated message $\sum_{i=1}^n c_i M_i$. This can be performed polynomially many times. + +4. Eventually, $\mathcal{A}$ selects a vector of coefficients $\vec{c} = (c_1, \dots, c_n)$, where $c_i \in C$, and some $M' \in \mathcal{M}$. + +5. 
The adversary $\mathcal{A}$ wins the game if $1 \leftarrow \text{Vrfy}(pk; M', \sigma)$ and $M' \neq \sum_{i=1}^n c_i M_i$, where $\sigma \leftarrow \text{HLTAgg}(\vec{c}, \text{Tag})$ was computed by the challenger, where $\text{Tag} = (\sigma_1, \dots, \sigma_n)$ corresponds to the message vector $(M_1, \dots, M_n)$ that can be identified by the coefficient vector $\vec{c} = (c_1, \dots, c_n)$ provided by the adversary $\mathcal{A}$. + +We say $\Lambda$ is secure if no probabilistic polynomial-time algorithm $\mathcal{A}$ can win the game with a non-negligible probability in the security parameter $\ell$. + +From the security game, we observe that the adversary $\mathcal{A}$ is only allowed to manipulate the messages $M_1, \dots, M_n$ but not the tags. This further explains why HLT is weaker than the aforementioned HLA (Homomorphic Linear Authenticator) [2, 3, 26], where the adversary can manipulate both messages and tags. This can be stated as: + +LEMMA 1. Any secure HLA scheme as defined in [3] is also a secure HLT scheme as defined above. + +## 4.2 Construction and Analysis of HLT + +We present a HLT scheme whose security is based on the Discrete Logarithm (DLOG) problem. The scheme consists of the following algorithms. + +* $(sk, pk) \leftarrow \text{KeyGen}(1^\ell):$ + +1. Let $q$ be a $\ell$-bit prime and $p$ be another large prime such that $q|(p-1)$. + +2. Select $v_1$ and $v_2$ uniformly at random from $Z_p^*$ such that the order of $v_1$ and $v_2$ is $q$ + +3. Select $s_{j1}, s_{j2}$ uniformly at random from $Z_q^*$ and set $z_j = v_1^{-s_{j1}} v_2^{-s_{j2}} \mod p$, for $1 \le j \le m$. + +4. Let $sk = \{(s_{11}, s_{12}), \dots, (s_{m1}, s_{m2})\}$ and $pk = \{v_1, v_2, z_1, \dots, z_m\}$. + +5. The coefficient domain $C$ is $[0, q)$ and the message space is $\mathcal{M} = [0, q)^m$. 
+ +* $\sigma_i \leftarrow \text{TagGen}(sk, M_i)$: For $M_i \in \mathcal{M}$, the tag $\sigma_i$ is computed by selecting $r_1, r_2$ uniformly at random from $Z_q^*$ and setting: + +$$x = v_1^{r_1} v_2^{r_2} \mod p,$$ + +$$y_1 = r_1 + \sum_{j=1}^{m} M_i[j] s_{j1} \mod q,$$ + +$$y_2 = r_2 + \sum_{j=1}^{m} M_i[j] s_{j2} \mod q.$$ + +Let $\sigma_i = (x, y_1, y_2)$. + +* $\sigma \leftarrow \text{HLTAgg}(\vec{c}, \text{Tag})$: Given tags $\text{Tag} = (\sigma_1, \dots, \sigma_n)$ with $\sigma_i = (x_i, y_{i1}, y_{i2})$, and $\vec{c} = (c_1, \dots, c_n)$, the aggregate tag $\sigma = (x, y_1, y_2)$ is computed as: + +$$x = \prod_{i=1}^{n} x_i^{c_i} \mod p,$$ + +$$y_1 = \sum_{i=1}^{n} c_i y_{i1} \mod q,$$ + +$$y_2 = \sum_{i=1}^{n} c_i y_{i2} \mod q.$$ + +* $\{\text{0}, \text{1}\} \leftarrow \text{Vrfy}(pk, M, \sigma)$: To verify that $M$ is valid with respect to tag $\sigma$, check whether + +$$x \stackrel{?}{=} v_1^{y_1} v_2^{y_2} \prod_{j=1}^{m} z_j^{M[j]} \mod p.$$ + +If it holds, return 1; otherwise, return 0. + +It can be verified that $M = \sum_{i=1}^n c_i M_i$ matches the aggregated tag $\sigma$ because + +$$
\begin{align*}
v_1^{y_1} v_2^{y_2} \prod_{j=1}^{m} z_j^{M[j]} &= v_1^{\sum_{i=1}^n c_i y_{i1}} v_2^{\sum_{i=1}^n c_i y_{i2}} \prod_{j=1}^{m} z_j^{\sum_{i=1}^n c_i M_i[j]} \\
&= \prod_{i=1}^{n} v_1^{c_i y_{i1}} \prod_{i=1}^{n} v_2^{c_i y_{i2}} \prod_{j=1}^{m} z_j^{\sum_{i=1}^{n} c_i M_i[j]} \\
&= \prod_{i=1}^{n} \left( v_1^{c_i y_{i1}} v_2^{c_i y_{i2}} \prod_{j=1}^{m} z_j^{c_i M_i[j]} \right) \\
&= \prod_{i=1}^{n} x_i^{c_i} = x
\end{align*}
$$ \ No newline at end of file diff --git a/samples/texts/7449877/page_6.md b/samples/texts/7449877/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..bef46679b835ad4c01e04be1a1c2993bffc0fab2 --- /dev/null +++ b/samples/texts/7449877/page_6.md @@ -0,0 +1,195 @@ +**THEOREM 2.** *Assuming the DLOG problem is hard, the HLT scheme is secure according to Definition 6.* + +PROOF.
Let $M_1, \dots, M_n$ be the messages adaptively selected by $\mathcal{A}$ and $\sigma_1 = (x_1, y_{11}, y_{12}), \dots, \sigma_n = (x_n, y_{n1}, y_{n2})$ be the corresponding tags generated by the challenger. Assume the adversary wins the security game with a non-negligible probability; that is, it outputs a vector of coefficients $\vec{c} = (c_1, \dots, c_n)$ and a message $M' \in \mathcal{M}$ such that $M' \neq M = \sum_{i=1}^n c_i M_i$ but $1 \leftarrow \text{Vrfy}(pk, M', \sigma)$, where $\sigma \leftarrow \text{HLTAgg}(\vec{c}, \text{\textbf{Tag}})$ and $\text{\textbf{Tag}} = (\sigma_1, \dots, \sigma_n)$. We show that such an $\mathcal{A}$ can be used to solve the DLOG problem: given $v_1, v_2$ randomly selected from $Z_p^*$, find $\log_{v_2}(v_1)$. + +Suppose $\sigma = (x, y_1, y_2)$. Since $1 \leftarrow \text{Vrfy}(pk, M', \sigma)$, we have + +$$x = v_1^{y_1} v_2^{y_2} \prod_{j=1}^{m} z_j^{M'[j]}.$$ + +On the other hand, as $\sigma \leftarrow \text{HLTAgg}(\vec{c}, \text{\textbf{Tag}})$, we have + +$$x = v_1^{y_1} v_2^{y_2} \prod_{j=1}^{m} z_j^{M[j]},$$ + +where $M = \sum_{i=1}^{n} c_i M_i$. Therefore, we have + +$$\prod_{j=1}^{m} z_j^{M'[j]} = \prod_{j=1}^{m} z_j^{M[j]},$$ + +namely + +$$\prod_{j=1}^{m} z_j^{M'[j] - M[j]} = 1.$$ + +As $M' \neq M$, let $\Delta M[j] = M'[j] - M[j]$ for $1 \le j \le m$. Since $z_j = v_1^{-s_{j1}} v_2^{-s_{j2}}$, we have + +$$v_1^{\sum_{j=1}^{m} -s_{j1} \Delta M[j]} v_2^{\sum_{j=1}^{m} -s_{j2} \Delta M[j]} = 1.$$ + +We claim that $\sum_{j=1}^{m} -s_{j1} \Delta M[j] \equiv 0 \pmod{q}$ holds with at most negligible probability because the $s_{j1}$ for $1 \le j \le m$ are kept secret. Then we have + +$$v_1 = v_2^{\frac{\sum_{j=1}^{m} s_{j2} \Delta M[j]}{\sum_{j=1}^{m} -s_{j1} \Delta M[j]}},$$ + +which yields $\log_{v_2}(v_1)$ and thus solves the DLOG problem. □ + +**Performance.** + +As stated in Lemma 1, any secure HLA scheme is also a secure HLT scheme. Now we show that HLT constructions can be significantly more efficient than HLA schemes.
Specifically, we +compare our HLT with two HLA schemes presented in [2, 26]. We +use comparable parameters that offer the same level of security. +Specifically, the parameter $q$ is 140-bit and $p$ is 512-bit in our HLT +scheme, $p$ is 160-bit in [26] and $N$ is 1024-bit in [2]. We consider +$n$ messages, namely $M_i = (M_i[1], \dots, M_i[m])$ for $1 \le i \le n$, +and compare the costs of the respective operations. + +As shown in Table 1, the HLA scheme presented in [26] has the shortest tag but incurs the most expensive computation. Recall that exponentiations and multiplications in pairing groups are much less efficient than those in integer groups (e.g., the cost of one pairing is about that of 6-20 exponentiations [4]). + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+| | HLT | HLA [26] | HLA [2] |
+| --- | --- | --- | --- |
+| assumption | DLOG | CDH | Factoring |
+| pairing-based? | No | Yes | No |
+| tag size | 790 bits | 160 bits | 1024 bits |
+| verify (single) | mEx | 2Pairing + mEx | mEx |
+| verify (aggregate) | mEx + mn Mu | 2Pairing + (m + n)Ex + mn Mu | (m + n)Ex + mn Mu |
+| tagAggregate | nEx + 2n Mu | nEx + n Mu | nEx + n Mu |
+ +Table 1: Performance of HLT and HLA, where Ex denotes exponentiation and Mu denotes multiplication. + +# 5. QUERY INTEGRITY FOR OUTSOURCED DYNAMIC DATABASES: CONSTRUCTION AND ANALYSIS + +In this section, we begin with a discussion of the solution design space. Then, we present the main construction and analyze its security. Finally, we discuss its efficiency with a comparison to the state-of-the-art solutions. + +## 5.1 Solution Space + +As discussed in the related work section, the state-of-the-art solutions to the query integrity problem fall into two approaches. The first approach is tree-based [13]. This approach incurs the least computational complexity because of the hash functions, but also incurs an $O(n \log n)$ communication overhead. The second approach is signature-based [23]. This approach incurs a high computational complexity of $O(kn)$ bilinear-map exponentiations and a communication complexity of $O(n)$ bitmaps (each of a small constant number of bits). Both approaches incur $O(mn)$ extra storage complexity in the cloud. + +Our solution is based on a third approach. It reduces the extra complexity at the cloud side from $O(mn)$ to $O(n)$. It achieves a balanced trade-off between computational and communication complexities. Specifically, it is less efficient than the tree-based solution in terms of computational complexity but substantially more efficient in terms of communication complexity. It is also substantially more efficient than the signature-based solution in terms of computational complexity but less efficient in terms of communication complexity. Perhaps more importantly, our solution can accommodate aggregate queries, which are not supported by the state-of-the-art solutions [13, 23].
+ +The high-level idea of our solution is the following: The HLT scheme generates a tag for each tuple in the table, and the AuthODS scheme can be built on those tags, which are ordered by the search key. Intuitively, the AuthODS scheme provides two functionalities: one is to enable range queries, and the other is to guarantee tag integrity (i.e., preventing HLT tags from being manipulated). The performance gain comes from the HLT scheme because only one aggregate tuple is needed to verify the integrity of (parts of) tuples. This is critical for the projection query because its query result only contains a portion of the attributes from all tuples. + +## 5.2 Proposed Construction + +Let R be a table of $n$ tuples with schema $(A_1, \dots, A_m)$ and $r_1, \dots, r_n$ be the tuples ordered by search key $A_1$. Let $L$ and $U$ be the lower and upper bounds of the search key $A_1$, respectively. + +Let $\Lambda_{\mathrm{RS}} = (\mathrm{KeyGen}, \mathrm{Setup}, \mathrm{QueryVrfy}, \mathrm{Update})$ be a secure AuthODS scheme and $\Lambda_{\mathrm{HLT}} = (\mathrm{KeyGen}, \mathrm{TagGen}, \mathrm{Vrfy}, \mathrm{HLTAgg})$ be a secure HLT scheme. The AuthDDB scheme is described as follows: + +• KeyGen: Given the primary security parameter $\ell$, the data \ No newline at end of file diff --git a/samples/texts/7449877/page_7.md b/samples/texts/7449877/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..96e93b16736de91f09977472bf4851ba08583b6f --- /dev/null +++ b/samples/texts/7449877/page_7.md @@ -0,0 +1,77 @@ +owner obtains two secondary security parameters $l_1$ and $l_2$, and generates a pair of private and public keys (sk, pk): + +1. Compute $(\Lambda_{RS}.sk, \Lambda_{RS}.pk) \leftarrow \Lambda_{RS}.KeyGen(1^{l_1})$. + +2. Compute $(\Lambda_{HLT}.sk, \Lambda_{HLT}.pk) \leftarrow \Lambda_{HLT}.KeyGen(1^{l_2})$. + +3. 
$sk = \{\Lambda_{RS}.sk, \Lambda_{HLT}.sk\}$ and $pk = \{\Lambda_{RS}.pk, \Lambda_{HLT}.pk\}$. + +4. $\Lambda_{HLT}.KeyGen$ specifies the coefficient domain $C$ and the message space $\mathcal{M}$, s.t. $(r_i.A_1, \dots, r_i.A_m) \in \mathcal{M}$ for $r_i \in R$, $1 \le i \le n$. + +* **Setup:** The data owner takes as input the private key sk and a table R, and obtains State and Au as follows: + +1. Let $r_0$ and $r_{n+1}$ be two tuples added at both ends of the table R in order to facilitate range queries, where $r_0.A_1 = L$ and $r_{n+1}.A_1 = U$. + +2. Compute $\sigma_i \leftarrow \Lambda_{HLT}.TagGen(\Lambda_{HLT}.sk, r_i)$ for each tuple $r_i$, $0 \le i \le n+1$. + +3. Let $E_{RS}$ be the ordered data set such that $E_{RS} = \{E_0, \dots, E_{n+1}\}$, where $E_i = (r_i.A_1, \sigma_i)$ for $0 \le i \le n+1$ and $E_{RS}$ is ordered by $A_1$. Compute ($\text{State}_{RS}, \text{Au}_{RS}, E_{RS}$) $\leftarrow \Lambda_{RS}.\text{Setup}(\Lambda_{RS}.sk, E_{RS})$. + +4. Let $\text{State} = \text{State}_{RS}$ and $\text{Au} = (\text{Au}_{RS}, E_{RS})$. R and Au will be outsourced to the server, and State will be made public. + +* **Update:** The data owner interacts with the server to update the stored table with the update information Upd. + +**Insertion:** Suppose Upd is "insert the tuple r into R where $r_s.A_1 < r.A_1 < r_{s+1}.A_1$, $0 \le s \le n$": + +1. The data owner computes $\sigma \leftarrow \Lambda_{HLT}.TagGen(\Lambda_{HLT}.sk, r)$. + +2. Let $\text{Upd}_{RS}$ be "add an element $E = (r.A_1, \sigma)$ between $E_s$ and $E_{s+1}$". The data owner takes as input $\text{Upd}_{RS}$, $\Lambda_{RS}.sk$ and $\text{State}_{RS}$, runs protocol $\Lambda_{RS}$.Update with the server, who takes as input $\text{Upd}_{RS}$ and $\text{Au}_{RS}$. Eventually, the data owner outputs $\text{State}'_{RS}$ and the server updates $\text{Au}_{RS}$ to $\text{Au}'_{RS}$ and $E_{RS}$ to $E'_{RS}$. + +3. The data owner delivers Upd to the server, and the server updates R to R'.
+ +**Replacement:** Suppose Upd is "update the tuple r with r'": + +1. The data owner fetches the tag $\sigma$ for the tuple r from the server. + +2. The data owner computes $\sigma' \leftarrow \Lambda_{HLT}.TagGen(\Lambda_{HLT}.sk, r')$. + +3. Let $\text{Upd}_{RS}$ be "update the element $(r.A_1, \sigma)$ with $(r'.A_1, \sigma')$". The data owner takes as input $\text{Upd}_{RS}$, $\Lambda_{RS}.sk$ and $\text{State}_{RS}$, runs protocol $\Lambda_{RS}$.Update with the server, who takes as input $\text{Upd}_{RS}$ and $\text{Au}_{RS}$. Eventually, the data owner outputs $\text{State}'_{RS}$ and the server updates $\text{Au}_{RS}$ to $\text{Au}'_{RS}$ and $E_{RS}$ to $E'_{RS}$. + +4. The data owner delivers Upd to the server, and the server updates R to R'. + +**Deletion:** Suppose Upd is "delete the tuple r": + +1. The data owner fetches the tag $\sigma$ for the tuple r from the server. + +2. Let $\text{Upd}_{RS}$ be "delete the element $(r.A_1, \sigma)$". The data owner takes as input $\text{Upd}_{RS}$, $\Lambda_{RS}.sk$ and $\text{State}_{RS}$, runs protocol $\Lambda_{RS}$.Update with the server, who takes as input $\text{Upd}_{RS}$ and $\text{Au}_{RS}$. Eventually, the data owner outputs $\text{State}'_{RS}$ and the server updates $\text{Au}_{RS}$ to $\text{Au}'_{RS}$ and $E_{RS}$ to $E'_{RS}$. + +3. The data owner delivers Upd to the server, and the server updates R to R'. + +We present the construction of the QueryVrfy protocol based on the query type. Recall that $\text{State} = \text{State}_{RS}$ and $\text{Au} = (\text{Au}_{RS}, E_{RS})$. + +### QueryVrfy on Selection Query. + +Suppose the selection query is qry = "select * from R where $A_1 \ge a$ and $A_1 \le b$". There are two scenarios. + +* If the result Rst is not null, assume $Rst = \{r_s, ..., r_t\}$, $1 \le s \le t \le n$, where $r_{s-1}.A_1 < a \le r_s.A_1$ and $r_t.A_1 \le b < r_{t+1}.A_1$. The protocol proceeds as follows: + + 1. The server sets Rst = {$r_s, ..., r_t$}, and sends Rst to the querier. + + 2. 
The querier runs protocol $\Lambda_{RS}$.QueryVrfy with the server for range query qry(a, b). If the output is reject, the querier aborts; otherwise, the querier obtains range query result Rst$_{RS} = ((r_s.A_1, \sigma_s), ..., (r_t.A_1, \sigma_t))$ and Prf$_{RS}$. + + 3. The querier randomly selects a vector of coefficients $\vec{c} = (c_s, ..., c_t)$, computes $\sigma \leftarrow \Lambda_{HLT}.HLTAgg(\vec{c}, \vec{\Gamma})$ where $\vec{\Gamma} = (\sigma_s, ..., \sigma_t)$, and runs $\Lambda_{HLT}.Vrfy(\Lambda_{HLT}.pk, \sum_{i=s}^t c_i r_i, \sigma)$. If the output is 1, the querier returns (accept, Prf = (Rst$_{RS}$, Prf$_{RS}$), Rst); otherwise, the querier returns reject. + +* If the result Rst is null, there exist two tuples $r_s, r_{s+1}, 0 \le s \le n$ such that $r_s.A_1 < a < b < r_{s+1}.A_1$. The querier can verify this fact by running protocol $\Lambda_{RS}$.QueryVrfy with range query qry(a, b), which should return accept and Rst$_{RS}$ is null. + +### QueryVrfy on Projection Query. + +Suppose the projection query is qry = "select $A_1, ..., A_k$ from R" ($k \ge 1$). The protocol proceeds as follows: + +1. The server sets Rst = {$(r_i.A_1, ..., r_i.A_k), 1 \le i \le n$} and passes it to the querier. + +2. The querier runs protocol $\Lambda_{RS}$.QueryVrfy with the server on range query $(L, U)$. If the output is reject, the querier aborts; otherwise, the querier obtains the range query result Rst$_{RS} = ((r_0.A_1, \sigma_0), ..., (r_{n+1}.A_1, \sigma_{n+1}))$ and proof Prf$_{RS}$. + +3. The querier randomly selects a vector of coefficients $\vec{c} = (c_1, ..., c_n)$ and sends it to the server. + +4. The server computes $r.A_j = \sum_{i=1}^n c_i r_i.A_j$, $k+1 \le j \le m$ and sends $(r.A_{k+1}, ..., r.A_m)$ to the querier as part of Prf. + +5. 
The querier computes $r.A_j = \sum_{i=1}^n c_i r_i.A_j$, $1 \le j \le k$ from Rst = {$(r_i.A_1, ..., r_i.A_k), 1 \le i \le n$} and the aggregated tag $\sigma = \Lambda_{HLT}.HLTAgg(\vec{c}, \vec{\Gamma})$, where $\vec{\Gamma} = (\sigma_1, ..., \sigma_n)$.
\ No newline at end of file
diff --git a/samples/texts/7449877/page_8.md b/samples/texts/7449877/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..4236b980d774d160e9594a18ddaea30d2e981f65
--- /dev/null
+++ b/samples/texts/7449877/page_8.md
@@ -0,0 +1,47 @@
+6. The querier computes $Λ_{HLT}$.Vrfy($Λ_{HLT}.pk$, $M$, $\sigma$) where $M = (r.A_1, ..., r.A_m)$. If the output is 1, the querier returns (accept, Rst, Prf = (Rst$_{RS}$, Prf$_{RS}$, $r.A_{k+1}, ..., r.A_m$)); otherwise, the querier returns reject.
+
+### QueryVrfy on Join Query.
+
+Let $P$ be another table with schema $(B_1, ..., B_m)$ and be processed by SetUp, where $B_1$ is the search key. For convenience, suppose $P$ has $n$ tuples, and $A_2$ and $B_2$ are the respective primary keys of tables $R$ and $P$. Suppose the join query is qry = "select R.*, P.* from R, P where $R.A_s = P.B_t$" ($1 ≤ s, t ≤ m$). The protocol proceeds as follows:
+
+1. The server sets $Rst = (R^*, P^*)$ and passes it to the querier, where $R^*$ and $P^*$ are the tuples in $R$ and $P$ such that $R.A_s = P.B_t$.
+
+2. The querier runs QueryVrfy on projection queries with qry$_R$ = "select $A_2, A_s$ from R" and qry$_P$ = "select $B_2, B_t$ from P", respectively. If either execution outputs reject, the querier aborts; otherwise, the querier obtains $\{(r_i.A_2, r_i.A_s, σ_i), 1 ≤ i ≤ n\}$ and $\{(p_j.B_2, p_j.B_t, σ'_j), 1 ≤ j ≤ n\}$.
+
+3. The querier identifies tuples satisfying $R.A_s = P.B_t$ from $\{(r_i.A_2, r_i.A_s, σ_i), 1 ≤ i ≤ n\}$ and $\{(p_j.B_2, p_j.B_t, σ'_j), 1 ≤ j ≤ n\}$. Specifically, let $α$ and $β$ be two sets of indices such that $α ⊆ \{1, ..., n\}$, $β ⊆ \{1, ..., n\}$ and, for $i ∈ α, j ∈ β$, $r_i.A_s = p_j.B_t$. 
Then, the querier obtains two sets of tuples $\{(r_i.A_2, r_i.A_s, σ_i), i ∈ α\}$ and $\{(p_j.B_2, p_j.B_t, σ'_j), j ∈ β\}$, where for $i ∈ α, j ∈ β$, $r_i.A_s = p_j.B_t$. The querier verifies that the number of tuples in $R^*$ equals the number of tuples in $\{(r_i.A_2, r_i.A_s, σ_i), i ∈ α\}$, and that the number of tuples in $P^*$ equals the number of tuples in $\{(p_j.B_2, p_j.B_t, σ'_j), j ∈ β\}$. If both are true, the querier continues; otherwise, the querier aborts.
+
+4. The querier randomly selects a vector of coefficients $\bar{c} = (c_1, ..., c_{|α|})$, computes $\sigma$ by aggregating the tags $\{\sigma_i, i ∈ α\}$, and executes $Λ_{HLT}$.Vrfy with $\bar{c}$, $\sigma$, $R^*$ and $Λ_{HLT}.pk$. The same is executed with respect to $P^*$. If both executions output 1, the querier returns (accept, Rst, Prf = (Rst$_{RS,R}$, Prf$_{RS,R}$, Rst$_{RS,P}$, Prf$_{RS,P}$)); otherwise, the querier returns reject. Here $(Rst_{RS,R}, Prf_{RS,R})$ are the query result and proof when executing QueryVrfy on projection query qry$_R$, and $(Rst_{RS,P}, Prf_{RS,P})$ are the query result and proof when executing QueryVrfy on projection query qry$_P$.
+
+### QueryVrfy on Aggregate Query.
+
+Suppose the aggregate query is qry = "select SUM($A_2$) from R where $A_1 ≥ a$ and $A_1 ≤ b$". Suppose $1 ≤ s ≤ t ≤ n$, $r_{s-1}.A_1 < a ≤ r_s.A_1$ and $r_t.A_1 ≤ b < r_{t+1}.A_1$. The protocol proceeds as follows:
+
+1. The server sets $r.A_j = \sum_{i=s}^{t} r_i.A_j$ for $j = 1, ..., m$, sets $Rst = r.A_2$, and passes Rst and $(r.A_1, r.A_3, ..., r.A_m)$ to the querier.
+
+2. The querier runs protocol $Λ_{RS}$.QueryVrfy on range query qry(a, b) with the server. If the output is reject, the querier aborts; otherwise, the querier obtains $Rst_{RS} = ((r_s.A_1, σ_s), ..., (r_t.A_1, σ_t))$ and Prf$_{RS}$ for the range query.
+
+3. The querier computes $\sigma ← Λ_{HLT}.HLTAgg(\bar{c}, \bar{Tag})$, where $\bar{c}$ is a vector of 1's and $\bar{Tag} = (\sigma_s, ..., \sigma_t)$. 
The querier computes $Λ_{HLT}$.Vrfy($Λ_{HLT}.pk$, $M$, $\sigma$), where $M = (r.A_1, ..., r.A_m)$. If the output is 1, the querier returns (accept, Rst, Prf = (Rst$_{RS}$, Prf$_{RS}$, $r.A_1, r.A_3, ..., r.A_m$)); otherwise, the querier returns reject.
+
+**REMARK 1.** In the selection/projection/join queries, we use a randomly selected $\bar{c}$ to prevent an aggregation attack. To see this, let us consider the case without a random $\bar{c}$, namely when $\bar{c}$ is composed of 1s. The server sets $r'_i = r_i$, $s - 1 ≤ i ≤ t + 1$, and then manipulates two tuples $r_e, r_{e+1}$, $s ≤ e ≤ t - 1$, to obtain $r'_e = (r_e.A_1, r_e.A_2 + 1, r_e.A_3, ...)$ and $r'_{e+1} = (r_{e+1}.A_1, r_{e+1}.A_2 - 1, r_{e+1}.A_3, ...)$, which makes $\sum_{i=s}^{t} r_i = \sum_{i=s}^{t} r'_i$. Hence, the server could have $Λ_{HLT}$.Vrfy output 1 with the manipulated $\{r'_s, ..., r'_t\}$.
+
+**REMARK 2.** Note that our solution toward the aggregate query supports only SUM queries and weighted SUM queries.
+
+## 5.3 Security Analysis
+
+It is easy to check the correctness of the AuthDDB scheme. In what follows we focus on its security.
+
+**THEOREM 3.** *Assume $\Lambda_{RS}$ is a secure AuthODS scheme and $\Lambda_{HLT}$ is a secure HLT scheme, where the coefficient space is large enough (i.e., 1/|C| is negligible). The proposed AuthDDB scheme attains soundness with respect to the selection, projection, join and aggregate queries.*
+
+The basic idea to prove soundness is to show that if there exists a probabilistic polynomial-time adversary $\mathcal{A}$ that breaks the soundness of the AuthDDB scheme, we can break either the soundness of $\Lambda_{RS}$ or the security of $\Lambda_{HLT}$.
+
+**PROOF.** We show our proof through a sequence of games between a challenger, who plays the role of the data owner, and adversary $\mathcal{A}$, who acts as the malicious server.
+
+**Game 0:** Game 0 is defined as in Definition 2, where the challenger only keeps the relevant public/private keys and the latest state information State$_k$. 
+
+**Game 1:** Game 1 is defined as in Definition 2, where the challenger keeps the relevant public/private keys, the latest state information State$_k$ and the latest auxiliary information Au$_k$. We can see that the probability that $\mathcal{A}$ wins Game 1 is at most negligibly less than the probability that $\mathcal{A}$ wins Game 0.
+
+**Game 2:** Game 2 is defined as in Definition 2, where the challenger keeps the relevant public/private keys, the latest state information State$_k$, the latest auxiliary information Au$_k$, and the latest database D$_k$. We can see that the probability that $\mathcal{A}$ wins Game 2 is at most negligibly less than the probability that $\mathcal{A}$ wins Game 1.
+
+Let State$_k$, Au$_k$ and $D_k = R$ be the latest version of the state information, the auxiliary information and the database, where State$_k = \text{State}_{RS}$, Au$_k = (\text{Au}_{RS}, \text{E}_{RS})$, and E$_{RS} = \{(L, σ_0), (r_1.A_1, σ_1), ..., (r_n.A_1, σ_n), (U, σ_{n+1})\}$.
+
+**Soundness of Selection Query** Suppose the adversary $\mathcal{A}$ finds a selection query qry with query result Rst and proof Prf, and wins Game 2. In other words, given qry = "select * from R where $A_1 ≥ a$ and $A_1 ≤ b$", $\mathcal{A}$ returns $Rst = \{r'_s, ..., r'_t\}$ and Prf = $\{Rst_{RS}, Prf_{RS}\}$, and wins Game 2, where $Rst_{RS} = \{(r'_s.A_1, σ'_s), ..., (r'_t.A_1, σ'_t)\}$. Let localRst ← LocalQuery(qry, D$_k$) and localRst$_{RS}$ ← Λ$_{RS}$.LocalQuery(qry(a,b), E$_{RS}$), which are produced by the challenger with the stored State$_k$, Au$_k$ and $D_k = R$. 
\ No newline at end of file
diff --git a/samples/texts/7449877/page_9.md b/samples/texts/7449877/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..d948dcaef1a3116bdf36e36f3e10974e30460b87
--- /dev/null
+++ b/samples/texts/7449877/page_9.md
@@ -0,0 +1,53 @@
+Since $\mathcal{A}$ wins Game 2, we have
+
+$$ (accept, \text{Rst}_{\text{RS}}, \text{Prf}_{\text{RS}}) \leftarrow (Q(\Lambda_{\text{RS}}.pk, \text{qry}(a, b), \text{State}_k) \leftrightarrow S(\text{Au}_{\text{RS}}, E_{\text{RS}})). $$
+
+This means that given range query $\text{qry}(a, b)$, $\text{Rst}_{\text{RS}}$ is the query result on the ordered data set $E_{\text{RS}}$ with respect to state $State_k$. On the other hand, if localRst$_{\text{RS}} \neq \text{Rst}_{\text{RS}}$, there exist two different query results with respect to $\text{qry}(a, b)$, which contradicts the soundness of $\Lambda_{\text{RS}}$. Since $\Lambda_{\text{RS}}$ is sound, we have $\text{Rst}_{\text{RS}} = \text{localRst}_{\text{RS}}$. Therefore, we can assume localRst = {$r_s, ..., r_t$}.
+
+Suppose $\vec{c} = (c_s, ..., c_t)$ is the coefficient vector sent from the challenger to $\mathcal{A}$. The challenger computes $\sigma' \leftarrow \Lambda_{\text{HLT}}.\text{HLTAgg}(\vec{c}, \text{Tag}')$, where $\text{Tag}' = (\sigma'_s, ..., \sigma'_t)$. Since the adversary wins Game 2, it should satisfy:
+
+$$ 1 \leftarrow \Lambda_{\text{HLT}}.\text{Vrfy}(\Lambda_{\text{HLT}}.pk, \sum_{i=s}^{t} c_i r'_i, \sigma'). $$
+
+If localRst $\neq$ Rst, there exists some $i$, $s \le i \le t$, such that $r_i \neq r'_i$. So, we have $\sum_{i=s}^{t} c_i r'_i = \sum_{i=s}^{t} c_i r_i$ with negligible probability, because $c_s, ..., c_t$ are randomly selected from $\mathcal{C}$ and $\frac{1}{|\mathcal{C}|}$ is negligible. That is, we have another equation
+
+$$ 1 \leftarrow \Lambda_{\text{HLT}}.\text{Vrfy}(\Lambda_{\text{HLT}}.pk, \sum_{i=s}^{t} c_i r_i, \sigma'), $$
+
+which allows us to break the security of $\Lambda_{\text{HLT}}$ if localRst $\neq$ Rst. 
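The two ingredients of this argument — that an all-ones coefficient vector can be fooled (the manipulation of Remark 1) and that a random vector collides only with probability $1/|\mathcal{C}|$ — can both be checked in a toy setting. All concrete values below are assumptions for illustration only.

```python
import random
from itertools import product

# -- Part 1: the all-ones manipulation of Remark 1 ---------------------------
P = 2**61 - 1                                    # toy modulus (assumption)
rng = random.Random(1)
rows = [(i, rng.randrange(100), rng.randrange(100)) for i in range(1, 6)]

bad = [list(r) for r in rows]                    # server-side manipulation:
bad[1][1] += 1                                   # add 1 to r_e.A_2 ...
bad[2][1] -= 1                                   # ... subtract 1 from r_{e+1}.A_2
bad = [tuple(r) for r in bad]                    # column sums are unchanged

def agg(c, table):
    """Componentwise linear combination sum_i c_i * r_i (mod P)."""
    return tuple(sum(ci * r[j] for ci, r in zip(c, table)) % P
                 for j in range(len(table[0])))

ones = [1] * len(rows)
assert agg(ones, rows) == agg(ones, bad)         # all-ones misses the tampering
c = [rng.randrange(P) for _ in rows]
assert agg(c, rows) != agg(c, bad)               # a random c exposes it

# -- Part 2: the collision probability is exactly 1/|C| ----------------------
q = 7                                            # toy coefficient space C = Z_q
r, rp = [3, 5], [4, 4]                           # honest vs. tampered values
collisions = sum(
    1 for cc in product(range(q), repeat=2)
    if sum(ci * a for ci, a in zip(cc, r)) % q
       == sum(ci * b for ci, b in zip(cc, rp)) % q
)
assert collisions == q                           # q of the q^2 vectors collide
```

Part 2 counts exhaustively: for a single differing attribute, the aggregates collide for exactly $q$ of the $q^2$ coefficient vectors, i.e., a uniformly random $\vec{c}$ detects the tampering with probability $1 - 1/|\mathcal{C}|$.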
This means that if $\mathcal{A}$ breaks the soundness of AuthDDB, we can break either the soundness of $\Lambda_{\text{RS}}$ or the security of $\Lambda_{\text{HLT}}$.
+
+**Soundness of Projection Query** Suppose the adversary $\mathcal{A}$ finds a projection query qry with query result Rst and proof Prf, and wins Game 2. In other words, given qry = "select $A_1, ..., A_k$ from R" ($k \ge 1$), $\mathcal{A}$ returns query result Rst = {$(r'_i.A_1, ..., r'_i.A_k)$, $1 \le i \le n$} and Prf = ($\text{Rst}_{\text{RS}}, \text{Prf}_{\text{RS}}, r'.A_{k+1}, ..., r'.A_m$). Let localRst $\leftarrow$ LocalQuery(qry, $D_k$) and localRst$_{\text{RS}} \leftarrow \Lambda_{\text{RS}}$.LocalQuery(qry(L, U), $E_{\text{RS}}$), which are produced by the challenger with the stored $State_k$, $Au_k$ and $D_k = R$.
+
+Since $\mathcal{A}$ wins Game 2, we have
+
+$$ (accept, \text{Rst}_{\text{RS}}, \text{Prf}_{\text{RS}}) \leftarrow (Q(\Lambda_{\text{RS}}.pk, \text{qry}(L, U), \text{State}_k) \leftrightarrow S(\text{Au}_{\text{RS}}, E_{\text{RS}})). $$
+
+This means that given range query $\text{qry}(L, U)$, $\text{Rst}_{\text{RS}}$ is the query result on the ordered data set $E_{\text{RS}}$ with respect to state $State_k$. On the other hand, if localRst$_{\text{RS}} \neq \text{Rst}_{\text{RS}}$, there exist two different query results with respect to $\text{qry}(L, U)$, which contradicts the soundness of $\Lambda_{\text{RS}}$. Since $\Lambda_{\text{RS}}$ is sound, we have $\text{Rst}_{\text{RS}} = \text{localRst}_{\text{RS}}$. Therefore, we can assume localRst = {$r_1, ..., r_n$}.
+
+Suppose $\vec{c} = (c_1, ..., c_n)$ is the coefficient vector sent from the challenger to $\mathcal{A}$. The challenger computes $\sigma' \leftarrow \Lambda_{\text{HLT}}.\text{HLTAgg}(\vec{c}, \text{Tag}')$, where $\text{Tag}' = (\sigma'_1, ..., \sigma'_n)$ and $r'.A_j = \sum_{i=1}^n c_i r'_i.A_j$, $1 \le j \le m$. 
Since $\mathcal{A}$ wins Game 2, we have
+
+$$ 1 \leftarrow \Lambda_{\text{HLT}}.\text{Vrfy}(\Lambda_{\text{HLT}}.pk, (r'.A_1, ..., r'.A_m), \sigma'). $$
+
+If localRst $\neq$ Rst, there exist some $i, j$, $1 \le i \le n$, $1 \le j \le k$, such that $r_i.A_j \neq r'_i.A_j$. This means that $\sum_{i=1}^n c_i r'_i.A_j = \sum_{i=1}^n c_i r_i.A_j$ with negligible probability. Therefore, we obtain another equation
+
+$$ 1 \leftarrow \Lambda_{\text{HLT}}.\text{Vrfy}(\Lambda_{\text{HLT}}.pk, (\sum_{i=1}^{n} c_i r_i.A_1, \dots, \sum_{i=1}^{n} c_i r_i.A_m), \sigma'), $$
+
+which allows us to break the security of $\Lambda_{\text{HLT}}$ if localRst $\neq$ Rst. This means that if $\mathcal{A}$ breaks the soundness of AuthDDB, we can break either the soundness of $\Lambda_{\text{RS}}$ or the security of $\Lambda_{\text{HLT}}$. □
+
+**Soundness of Aggregate Query** Suppose the adversary $\mathcal{A}$ finds an aggregate query qry = "select SUM($A_2$) from R where $A_1 ≥ a$ and $A_1 ≤ b$" with query result Rst and proof Prf, and wins Game 2. Since $\mathcal{A}$ wins Game 2, we have
+
+$$ (accept, \text{Rst}_{\text{RS}}, \text{Prf}_{\text{RS}}) \leftarrow (Q(\Lambda_{\text{RS}}.pk, \text{qry}(a, b), \text{State}_k) \leftrightarrow S(\text{Au}_{\text{RS}}, E_{\text{RS}})) $$
+
+This means that given range query qry($a, b$), $\text{Rst}_{\text{RS}}$ is the query result on the ordered data set $E_{\text{RS}}$ with respect to state $State_k$. On the other hand, if localRst$_{\text{RS}} \neq \text{Rst}_{\text{RS}}$, there exist two different query results with respect to qry($a, b$), which contradicts the soundness of $\Lambda_{\text{RS}}$. Since $\Lambda_{\text{RS}}$ is sound, we have $\text{Rst}_{\text{RS}} = \text{localRst}_{\text{RS}}$, which means that any tuple $r_i$ in $R$, where $s \le i \le t$, satisfies $a \le r_i.A_1 \le b$. The challenger computes $\sigma' \leftarrow \Lambda_{\text{HLT}}.\text{HLTAgg}(\vec{c}, \text{Tag}')$, where $\text{Tag}' = (\sigma'_s, ..., \sigma'_t)$ and $\vec{c}$ is all 1's. If $\mathcal{A}$ wins Game 2, it should satisfy
+
+$$ 1 \leftarrow \Lambda_{\text{HLT}}.\text{Vrfy}(\Lambda_{\text{HLT}}.pk, (r'.A_1, ..., r'.A_m), \sigma'). $$
+
+If localRst $\neq$ Rst, we have $\sum_{i=s}^{t} r_i.A_2 \neq r'.A_2$. 
Therefore, we obtain another equation
+
+$$ 1 \leftarrow \Lambda_{\mathrm{HLT}}.\mathrm{Vrfy}(\Lambda_{\mathrm{HLT}}.pk, (\sum_{i=s}^{t} r_i.A_1, \ldots, \sum_{i=s}^{t} r_i.A_m), \sigma'), $$
+
+which allows us to break the security of $\Lambda_{\mathrm{HLT}}$ if localRst $\neq$ Rst. This means that if $\mathcal{A}$ breaks the soundness of AuthDDB, we can break either the soundness of $\Lambda_{\mathrm{RS}}$ or the security of $\Lambda_{\mathrm{HLT}}$. □
+
+## 5.4 Performance
+
+We compare the asymptotic performance of our solution with that of the two state-of-the-art solutions [13, 23]. As shown in Table 2, our solution is more expressive because it additionally supports aggregate queries, such as "select SUM($A_2$) from R where $A_1 > a$". Moreover, our solution allows join queries with respect to arbitrary attributes, such as "select R.*, P.* from R, P where $R.A_3 = P.B_4$", without requiring that $R.A_3$ and $P.B_4$ be search keys. In contrast, this type of join query cannot be handled by the state-of-the-art solutions [13, 23].
+
+Regarding pre-processing the database before outsourcing it to the cloud, our solution is more efficient than [23], and as efficient as [13]. In addition, our solution incurs the least extra storage complexity. To see this, we compare the three solutions with the parameters in Table 1. Figure 2(a) shows that our solution is more storage-efficient than [13, 23] as long as the number of attributes is greater than the size ratio Tag/Hash, which is often the case. Moreover, from Figure 2(b) we can see that the storage-space requirement of our solution is independent of the number of attributes; in contrast, the storage-space complexity of [13, 23] increases linearly with the number of attributes. 
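The storage comparison can be made concrete with a back-of-the-envelope model. The tag and hash sizes below are assumed stand-ins, not the exact parameters of Table 1: our scheme keeps one aggregatable tag per tuple, while the baseline is modeled as one hash per attribute per tuple.

```python
# Assumed sizes: a 128-byte aggregatable tag vs. a 32-byte hash (illustrative
# stand-ins, not the exact parameters of Table 1).
def extra_storage_ours(n, tag_bytes):
    """Our scheme: one tag per tuple, independent of the number of attributes m."""
    return n * tag_bytes

def extra_storage_baseline(n, m, hash_bytes):
    """Baseline model: roughly one hash per attribute per tuple, linear in m."""
    return n * m * hash_bytes

n, tag_bytes, hash_bytes = 10_000, 128, 32
# Ours wins exactly when m exceeds the size ratio Tag/Hash = 128/32 = 4.
results = {m: extra_storage_ours(n, tag_bytes) < extra_storage_baseline(n, m, hash_bytes)
           for m in (3, 4, 5, 16)}
assert results == {3: False, 4: False, 5: True, 16: True}
```

Under this model the crossover point is the ratio Tag/Hash, matching the observation about Figure 2(a) above.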
Regarding selection queries, projection queries and join queries, our solution incurs respective computational complexity $O(\log n)$
\ No newline at end of file
diff --git a/samples/texts/7597009/page_1.md b/samples/texts/7597009/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..2e7680165376d880a04edf80fde239443ae30
--- /dev/null
+++ b/samples/texts/7597009/page_1.md
@@ -0,0 +1,20 @@
+# INDUCED MAPPINGS ON SYMMETRIC PRODUCTS
+
+by
+GALO HIGUERA AND ALEJANDRO ILLANES
+
+Electronically published on December 8, 2010
+
+Topology Proceedings
+
+**Web:** http://topology.auburn.edu/tp/
+
+**Mail:** Topology Proceedings
+Department of Mathematics & Statistics
+Auburn University, Alabama 36849, USA
+
+**E-mail:** topolog@auburn.edu
+
+**ISSN:** 0146-4124
+
+COPYRIGHT © by Topology Proceedings. All rights reserved.
\ No newline at end of file
diff --git a/samples/texts/7597009/page_10.md b/samples/texts/7597009/page_10.md
new file mode 100644
index 0000000000000000000000000000000000000000..c65c8dfea280e96ca55fb0043bee6b550b69f1b6
--- /dev/null
+++ b/samples/texts/7597009/page_10.md
@@ -0,0 +1,25 @@
+Since $\{x_1, x_2, x_3\} \in \langle K, L, E \rangle_n$, $\mathcal{C} \subset \langle K, L, E \rangle_n$. Let $V, W$ be disjoint open subsets of $Y$ such that $\{y_2\} \cup \{v_1, v_2, ...\} \subset V$ and $y_1 \in W$. Then $E \subset f^{-1}(V)$ and $K \cup L \subset f^{-1}(W)$.
+
+Since $g$ is open, $g$ is confluent ([30, Theorem (7.5), p. 148]; the definition of confluence is in Definition 3.17). Thus $g(\mathcal{C}) = \mathcal{D}$. Since $\{x_1, x_2\} \in \langle f^{-1}(y_1), f^{-1}(y_2)\rangle_n = g^{-1}(\mathcal{D})$, $g(\{x_1, x_2\}) \in \mathcal{D}$. Hence, there exists $C \in \mathcal{C} \subset \langle K, L, E \rangle_n$ such that $g(C) = g(\{x_1, x_2\})$. 
The set $C$ is of the form $C = \{p_1, \dots, p_r, p_{r+1}, \dots, p_s, p_{s+1}, \dots, p_t\}$, where $t \le n$, the points $p_1, \dots, p_t$ are pairwise different, $\{p_1, \dots, p_r\} \subset K$, $\{p_{r+1}, \dots, p_s\} \subset L$ and $\{p_{s+1}, \dots, p_t\} \subset E$. Let $\epsilon > 0$ be such that $B(\epsilon, p_1), \dots, B(\epsilon, p_t)$ are pairwise disjoint, $B(\epsilon, p_1) \cup \dots \cup B(\epsilon, p_s) \subset f^{-1}(W)$ and $B(\epsilon, p_{s+1}) \cup \dots \cup B(\epsilon, p_t) \subset f^{-1}(V)$. + +Let $\mathcal{U} = \langle B(\epsilon, p_1), \dots, B(\epsilon, p_t) \rangle_n$. Then $\mathcal{U}$ is open in $\mathcal{F}_n(X)$, $C \in \mathcal{U}$, $g(\mathcal{U})$ is open in $Z$ and $g(\{x_1, x_2\}) = g(C) \in g(\mathcal{U})$. Since $\lim_{m \to \infty} \{x_1, u_m, \dots, u_{m+n-2}\} = \{x_1, x_2\}$, we have + +$$\lim_{m \to \infty} g(\{x_1, u_m, \dots, u_{m+n-2}\}) = g(\{x_1, x_2\}) \in g(\mathcal{U}).$$ + +So, there exists $m \in \mathbb{N}$ such that $g(\{x_1, u_m, \dots, u_{m+n-2}\}) \in g(\mathcal{U})$. Thus, there exists $A \in \mathcal{U}$ such that $g(A) = g(\{x_1, u_m, \dots, u_{m+n-2}\})$. Therefore $f(A) = f_n(A) = h(g(A)) = h(g(\{x_1, u_m, \dots, u_{m+n-2}\})) = f_n(\{x_1, u_m, \dots, u_{m+n-2}\}) = \{y_1, v_m, \dots, v_{m+n-2}\}$. Let $q_1, \dots, q_{n-1} \in A$ be such that $f(q_1) = v_m, \dots, f(q_{n-1}) = v_{m+n-2}$. Notice that $\{q_1, \dots, q_{n-1}\} \subset f^{-1}(V)$. + +Notice that $p_1 \in K$ and $p_{r+1} \in L$, so there exist $a_1 \in A \cap B(\epsilon, p_1) \subset f^{-1}(W)$ and $a_2 \in A \cap B(\epsilon, p_{r+1}) \subset f^{-1}(W)$. Thus the points $a_1, a_2, q_1, \dots, q_{n-1}$ are pairwise different and they are elements of $A$. This is a contradiction since $A$ has at most $n$ points. + +Therefore $f$ is monotone. $\square$ + +**Corollary 3.15.** Suppose that $Y$ is a nondegenerate continuum. 
Then the following statements for a mapping $f: X \to Y$ are equivalent:
+
+a) $f$ is monotone,
+
+b) $f_n$ is monotone for some $n \in \mathbb{N}$,
+
+c) $f_n$ is monotone for every $n \in \mathbb{N}$,
+
+d) $f_n$ is MO for some $n \ge 3$.
+
+**Question 3.16.** Does Theorem 3.14 hold for $n = 2$?
\ No newline at end of file
diff --git a/samples/texts/7597009/page_11.md b/samples/texts/7597009/page_11.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b942feddc02a4cad265cadd7b1b6f6a27edd2a5
--- /dev/null
+++ b/samples/texts/7597009/page_11.md
@@ -0,0 +1,25 @@
+### 3.3. Confluent Mappings.
+
+**Definition 3.17.** A mapping $f: X \to Y$ is said to be:
+
+1) Confluent if for every subcontinuum $B$ of $Y$ and every component $A$ of $f^{-1}(B)$ we have that $f(A) = B$.
+
+2) Weakly confluent if for every subcontinuum $B$ of $Y$, there exists a component $A$ of $f^{-1}(B)$ such that $f(A) = B$.
+
+3) Semi-confluent if for every subcontinuum $B$ of $Y$ and every pair of components $C$ and $D$ of $f^{-1}(B)$ we have that $f(C) \subset f(D)$ or $f(D) \subset f(C)$.
+
+Clearly every confluent mapping is a weakly confluent and a semi-confluent mapping.
+
+**Example 3.18.** (compare with [18, Example 5.1]). There exist continua $X$ and $Y$ and a confluent (and thus, weakly confluent and semi-confluent) mapping $f: X \to Y$ such that $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is neither a confluent, weakly confluent nor semi-confluent mapping.
+
+*Proof.* Let $\mathbb{C}$ be the complex plane and $\mathbb{S}^1 \subset \mathbb{C}$ the unit circle centered at the origin. Let $X = \mathbb{S}^1 \cup I \cup J$, where $I$ and $J$ are two rays, each converging to one half of $\mathbb{S}^1$ as shown in Figure 1. Notice that the continuum $X$ is the union of two topological copies of the $\sin(\frac{1}{x})$-continuum joined by the end points of the limit segments. Let $f$ be the restriction of the complex function $z \to z^2$ to $X$. 
It is easy to show that $f$ is confluent. Let $Y = f(X)$. Then $f(I)$ and $f(J)$ are two rays converging to $\mathbb{S}^1$ as shown in Figure 1. We will construct a subcontinuum $\mathcal{K}$ of $\mathcal{F}_2(Y)$ which will be useful to deny the three definitions of confluence stated above.
+
+Let $\varepsilon = \frac{1}{16}$. Let $\alpha: [0, \infty) \to f(I)$ and $\beta: [0, \infty) \to f(J)$ be parametrizations of $f(I)$ and $f(J)$, respectively, by arc length. Let
+
+$$ \mathcal{A} = \{ \{\alpha(t), \alpha(t+\varepsilon)\} : t \in [0, \infty) \} $$
+
+and
+
+$$ \mathcal{B} = \{ \{\beta(t), \beta(t+\varepsilon)\} : t \in [0, \infty) \}. $$
+
+Notice that $\mathcal{A}$ (respectively, $\mathcal{B}$) consists of the pairs of points in $f(I)$ (respectively, $f(J)$) such that the subarc of $f(I)$ (respectively, $f(J)$) joining them has length equal to $\varepsilon$. Since the function $t \to \{\alpha(t), \alpha(t+\varepsilon)\}$ from $[0, \infty)$ to $\mathcal{F}_2(Y)$ is continuous, we have that $\mathcal{A}$ is connected. Similarly, $\mathcal{B}$ is connected. 
+$$ + +It is easy to see that $cl_{\mathcal{F}_2(Y)}(\mathcal{A}) \cup cl_{\mathcal{F}_2(Y)}(\mathcal{B}) = \mathcal{A} \cup \mathcal{B} \cup \mathcal{C} \cup \mathcal{D}$ and $cl_{\mathcal{F}_2(Y)}(\mathcal{A}) \cap cl_{\mathcal{F}_2(Y)}(\mathcal{B}) = \mathcal{C} \cup \mathcal{D}$. Thus, the set $\mathcal{K} = cl_{\mathcal{F}_2(Y)}(\mathcal{A}) \cup cl_{\mathcal{F}_2(Y)}(\mathcal{B})$ is a subcontinuum of $\mathcal{F}_2(Y)$. We use $\mathcal{K}$ to deny the three definitions related to confluence. + +Let $\mathbb{S}^+ = \{z \in \mathbb{S}^1 : \mathrm{Re}(z) \ge 0\}$ and $\mathbb{S}^- = \{z \in \mathbb{S}^1 : \mathrm{Re}(z) \le 0\}$. +Let $\alpha_0 : \mathbb{S}^1 \to \mathbb{S}^+$ be the map defined as $\alpha_0(z) = |\mathrm{Re}(z)| + i \mathrm{Im}(z)$ +and let $\beta_0 : \mathbb{S}^1 \to \mathbb{S}^-$ be the map defined as $\beta_0(z) = -|\mathrm{Re}(z)| +$ +$i \mathrm{Im}(z)$. For each $\theta \in \mathbb{R}$, let $\gamma(\theta) = \{\alpha_0(e^{i\theta}), \alpha_0(e^{i(\theta+\frac{\epsilon}{2})})\}$, $\delta(\theta) =$ +$\{\beta_0(e^{i\theta}), \beta_0(e^{i(\theta+\frac{\epsilon}{2})})\}$, $\lambda(\theta) = \{\alpha_0(e^{i\theta}), -\alpha_0(e^{i(\theta+\frac{\epsilon}{2})})\}$ and $\eta(\theta) =$ +$\{\beta_0(e^{i\theta}), -\beta_0(e^{i(\theta+\frac{\epsilon}{2})})\}$. 
Then the maps $\gamma$, $\delta$, $\lambda$ and $\eta$ are defined from $\mathbb{R}$ to $\mathcal{F}_2(\mathbb{S}^1)$ and they have the following properties: $\{e^{i(\frac{\pi}{2}-\frac{\epsilon}{4})}, -e^{i(\frac{\pi}{2}-\frac{\epsilon}{4})}\} \in \lambda(\mathbb{R}) \cap \eta(\mathbb{R})$; for each $\theta \in \mathbb{R}$, $\gamma(\theta) \in \langle \mathbb{S}^+, \mathbb{S}^+ - \{i, -i\} \rangle_2$ and $\delta(\theta) \in \langle \mathbb{S}^-, \mathbb{S}^- - \{i, -i\} \rangle_2$; $\gamma(\mathbb{R}) \cap \delta(\mathbb{R}) = \emptyset$; $(\gamma(\mathbb{R}) \cup \delta(\mathbb{R})) \cap (\lambda(\mathbb{R}) \cup \eta(\mathbb{R})) = \emptyset$; and the sets $\gamma(\mathbb{R})$, $\delta(\mathbb{R})$ and $\lambda(\mathbb{R}) \cup \eta(\mathbb{R})$ are compact and connected.
+
+Since $f|_I : I \to f(I)$ is a homeomorphism and $f^{-1}(f(I)) = I$, we have that $f_2^{-1}(\mathcal{A})$ is a connected subset of $\mathcal{F}_2(I)$ that is homeomorphic to $\mathcal{A}$. Similarly, $f_2^{-1}(\mathcal{B})$ is a connected subset of $\mathcal{F}_2(J)$ that is homeomorphic to $\mathcal{B}$. It is easy to show that $f_2^{-1}(\mathcal{K}) = (f_2^{-1}(\mathcal{A}) \cup \gamma(\mathbb{R})) \cup (f_2^{-1}(\mathcal{B}) \cup \delta(\mathbb{R})) \cup (\lambda(\mathbb{R}) \cup \eta(\mathbb{R}))$ and the components of $f_2^{-1}(\mathcal{K})$ are the sets $f_2^{-1}(\mathcal{A}) \cup \gamma(\mathbb{R})$, $f_2^{-1}(\mathcal{B}) \cup \delta(\mathbb{R})$ and $\lambda(\mathbb{R}) \cup \eta(\mathbb{R})$. Since $f_2(f_2^{-1}(\mathcal{A}) \cup \gamma(\mathbb{R})) \cap f_2(f_2^{-1}(\mathcal{B}) \cup \delta(\mathbb{R})) \subset \mathcal{F}_2(\mathbb{S}^1)$ and $f_2(\lambda(\mathbb{R}) \cup \eta(\mathbb{R})) \subset \mathcal{F}_2(\mathbb{S}^1)$, we have that $f_2(f_2^{-1}(\mathcal{A}) \cup \gamma(\mathbb{R})) \not\subset f_2(f_2^{-1}(\mathcal{B}) \cup \delta(\mathbb{R}))$, $f_2(f_2^{-1}(\mathcal{B}) \cup \delta(\mathbb{R})) \not\subset f_2(f_2^{-1}(\mathcal{A}) \cup \gamma(\mathbb{R}))$ and no component $L$ of $f_2^{-1}(\mathcal{K})$ has the property that $f_2(L) = \mathcal{K}$. Therefore, $f_2$ is neither confluent, weakly confluent nor semi-confluent. $\square$
+
+**Theorem 3.19.** If $f_n: \mathcal{F}_n(X) \rightarrow \mathcal{F}_n(Y)$ is confluent, for some $n \in \mathbb{N}$, then $f: X \rightarrow Y$ is confluent. 
+
+*Proof.* Let $B$ be a subcontinuum of $Y$ and $D$ a component of $f^{-1}(B)$. Notice that $\mathcal{F}_1(B)$ is a subcontinuum of $\mathcal{F}_n(Y)$ and that $\mathcal{F}_1(D)$ is a connected subset of $f_n^{-1}(\mathcal{F}_1(B))$. Let $\mathcal{C}$ be the component of $f_n^{-1}(\mathcal{F}_1(B))$ that contains $\mathcal{F}_1(D)$. By Lemma 2.1, $M = \bigcup\{E : E \in \mathcal{C}\}$ is connected. We will show that $M = D$.
\ No newline at end of file
diff --git a/samples/texts/7597009/page_13.md b/samples/texts/7597009/page_13.md
new file mode 100644
index 0000000000000000000000000000000000000000..3fb5303d74ab267452573210fc14dc21d75e09ac
--- /dev/null
+++ b/samples/texts/7597009/page_13.md
@@ -0,0 +1,9 @@
+FIGURE 1
+
+Clearly, $D \subset M$. Given $x \in M$ there exists $A \in \mathcal{C}$ such that $x \in A$. Since $A \in \mathcal{C} \subset f_n^{-1}(\mathcal{F}_1(B))$, $f_n(A) = \{b\}$ for some $b \in B$, and in particular $f(x) = b$. We have shown that $M \subset f^{-1}(B)$. Since $M$ is connected, $D \subset M$ and $D$ is a component of $f^{-1}(B)$, we obtain that $D = M$.
+
+Now, we show that $f(M) = B$. Clearly, $f(M) \subset B$. Given $b \in B$, $\{b\} \in \mathcal{F}_1(B)$. Since $f_n$ is confluent and $\mathcal{C}$ is a component of $f_n^{-1}(\mathcal{F}_1(B))$, $f_n(\mathcal{C}) = \mathcal{F}_1(B)$. Then there exists $A \in \mathcal{C}$ such that $f_n(A) = \{b\}$. Fix $x \in A$; then $f(x) = b$ and $x \in M$. Thus $f(x) \in f(M)$ and $b \in f(M)$. This shows that $B \subset f(M)$, so $f(M) = B$. Hence $f(D) = B$. Therefore $f$ is confluent. □
+
+**Theorem 3.20.** If $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is weakly confluent, for some $n \in \mathbb{N}$, then $f$ is weakly confluent.
+
+*Proof.* Let $B$ be a subcontinuum of $Y$. Since $f_n$ is weakly confluent there exists a component $\mathcal{D}$ of $f_n^{-1}(\mathcal{F}_1(B))$ such that $f_n(\mathcal{D}) = \mathcal{F}_1(B)$. 
\ No newline at end of file
diff --git a/samples/texts/7597009/page_14.md b/samples/texts/7597009/page_14.md
new file mode 100644
index 0000000000000000000000000000000000000000..5feecd922c91c1a389251b3030317df1b48471aa
--- /dev/null
+++ b/samples/texts/7597009/page_14.md
@@ -0,0 +1,19 @@
+Let $G = \bigcup\{E : E \in \mathcal{D}\}$. Clearly, $G \subset f^{-1}(B)$. Choose a component $C$ of $G$. Let $D$ be the component of $f^{-1}(B)$ that contains $C$. Given $y \in B$, since $f_n(\mathcal{D}) = \mathcal{F}_1(B)$, there exists $E \in \mathcal{D}$ such that $f_n(E) = \{y\}$. Since $\mathcal{D}$ is closed and connected, by Lemma 2.1, $E$ intersects every component of $G$. So we can pick $z \in C \cap E$. Since $z \in E$, $f(z) = y$. In addition, since $z \in C \subset D$, $f(z) = y \in f(D)$. We have shown that $B \subset f(D)$ and then $f(D) = B$. This ends the proof that $f$ is weakly confluent. □
+
+**Theorem 3.21.** If $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is semi-confluent, for some $n \in \mathbb{N}$, then $f$ is semi-confluent.
+
+*Proof.* Let $B$ be a subcontinuum of $Y$ and let $C$ and $D$ be components of $f^{-1}(B)$. Notice that $\mathcal{F}_1(B)$, $\mathcal{F}_1(C)$ and $\mathcal{F}_1(D)$ are connected, $\mathcal{F}_1(C) \subset f_n^{-1}(\mathcal{F}_1(B))$ and $\mathcal{F}_1(D) \subset f_n^{-1}(\mathcal{F}_1(B))$.
+
+Take the components $\mathcal{C}$ and $\mathcal{D}$ of $f_n^{-1}(\mathcal{F}_1(B))$ that contain $\mathcal{F}_1(C)$ and $\mathcal{F}_1(D)$, respectively. Let $M = \bigcup\{E : E \in \mathcal{C}\}$ and $N = \bigcup\{E : E \in \mathcal{D}\}$. Proceeding as in the proof of Theorem 3.19, it follows that $M = C$ and that $N = D$.
+
+Since $f_n$ is semi-confluent we may assume that $f_n(\mathcal{C}) \subset f_n(\mathcal{D})$. We now show that $f(C) \subset f(D)$. Let $y \in f(C)$. Let $c \in C$ be such that $f(c) = y$. Then $\{c\} \in \mathcal{C}$ and therefore $\{y\} \in f_n(\mathcal{C}) \subset f_n(\mathcal{D})$. 
Let $A \in \mathcal{D}$ be such that $f_n(A) = \{y\}$. Taking $x \in A$, we have that $x \in N = D$ and $f(x) = y$. Hence $y = f(x) \in f(D)$. Thus $f(C) \subset f(D)$. Therefore $f$ is semi-confluent. □ + +### 3.4. Light Mappings. + +**Definition 3.22.** A topological space $X$ is said to be totally disconnected if every component of $X$ is degenerate. + +It is well known that in compact metric spaces the components and quasi-components coincide. The next corollaries follow directly from this fact. + +**Corollary 3.23.** If $X$ is a metric compact space, then $X$ is totally disconnected if and only if for every pair of different points $x$ and $y$ there exists an open and closed subset $U$ of $X$ such that $x \in U$ and $y \notin U$. + +**Corollary 3.24.** If $X$ is a metric compact space, then $X$ is totally disconnected if and only if for every finite subset $A$ of $X$ and every point $y \in X \setminus A$ there exists an open and closed subset $U$ of $X$ such that $A \subset U$ and $y \notin U$. \ No newline at end of file diff --git a/samples/texts/7597009/page_15.md b/samples/texts/7597009/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..b771fd6979885975b15b09ebe7a28f78fc478aa0 --- /dev/null +++ b/samples/texts/7597009/page_15.md @@ -0,0 +1,23 @@ +**Definition 3.25.** A mapping between continua $f: X \to Y$ is light if $f^{-1}(y)$ is totally disconnected for every $y$ in $Y$. + +**Theorem 3.26.** The following three statements for a mapping $f: X \to Y$ are equivalent: + +a) $f$ is light, + +b) $f_n$ is light for some $n \in \mathbb{N}$, + +c) $f_n$ is light for every $n \in \mathbb{N}$. + +*Proof.* b) ⇒ a). Suppose that $f_n$ is light for some $n \in \mathbb{N}$. Given $y \in Y$, $f_n^{-1}(\{y\})$ is totally disconnected. Notice that, $x \in f^{-1}(y)$ if and only if $\{x\} \in f_n^{-1}(\{y\})$. Thus $f^{-1}(y)$ is homeomorphic to $f_n^{-1}(\{y\}) \cap \mathcal{F}_1(X)$. 
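A worked illustration of Corollary 3.24 (an example we add here for concreteness; it is not part of the original text):

```latex
% Illustration of Corollary 3.24 on a standard totally disconnected compactum
Let $X = \{0\} \cup \{1/n : n \in \mathbb{N}\}$ with the usual metric; $X$ is a
compact, totally disconnected metric space. Take the finite set
$A = \{1, \tfrac{1}{2}\}$ and the point $y = 0 \in X \setminus A$. The set
\[
  U = \{x \in X : x > \tfrac{1}{3}\} = \{1, \tfrac{1}{2}\}
\]
is open in $X$ (since $U = (\tfrac{1}{3}, \infty) \cap X$) and closed in $X$
(being finite), and it satisfies $A \subset U$ and $y \notin U$, as the
corollary guarantees.
```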
Therefore, $f^{-1}(y)$ is totally disconnected and $f$ is light. + +a) ⇒ c). Let $n \in \mathbb{N}$. Suppose that $f$ is light. Let $B = \{y_1, \dots, y_k\} \in \mathcal{F}_n(Y)$. Then $f^{-1}(y_i)$ is totally disconnected for each $i \in \{1, \dots, k\}$. + +It is easy to check that $f_n^{-1}(B) = \langle f^{-1}(y_1), \dots, f^{-1}(y_k) \rangle_n$. + +Let $A_1 = \{x_1, \dots, x_m\}$ and $A_2 = \{u_1, \dots, u_s\}$ be two different elements of $f_n^{-1}(B)$. We may assume that $x_1 \notin A_2$ and that $x_1 \in f^{-1}(y_1)$. By hypothesis $f^{-1}(y_1)$ is totally disconnected. Since $A_2 \cap f^{-1}(y_1)$ is finite, by Corollary 3.24, there exists an open and closed subset $K$ of $f^{-1}(y_1)$ such that $x_1 \in K$ and $(A_2 \cap f^{-1}(y_1)) \cap K = \emptyset$. Let $L = f^{-1}(y_1) \setminus K$. Then $K$ and $L$ are closed in $X$, $K \cap L = \emptyset$, $f^{-1}(y_1) = K \cup L$, $x_1 \in K$ and $A_2 \cap f^{-1}(y_1) \subset L$. + +Let $\mathcal{K} = \{A \in f_n^{-1}(B) : A \cap K \neq \emptyset\} = f_n^{-1}(B) \cap \langle K, X \rangle_n$ and let $\mathcal{L} = \langle L, f^{-1}(y_2), \dots, f^{-1}(y_k) \rangle_n$. Clearly $\mathcal{K}$ and $\mathcal{L}$ are closed subsets of $f_n^{-1}(B)$, $A_1 \in \mathcal{K}$ and $A_2 \in \mathcal{L}$. Given $A \in f_n^{-1}(B)$, if $A \cap K \neq \emptyset$, then $A \in \mathcal{K}$, and if $A \cap K = \emptyset$, since $f_n^{-1}(B) = \langle f^{-1}(y_1), \dots, f^{-1}(y_k) \rangle_n$, then $\emptyset \neq A \cap f^{-1}(y_1) \subset L$, so $A \in \mathcal{L}$. This shows that $f_n^{-1}(B) = \mathcal{K} \cup \mathcal{L}$. + +If there is an element $A \in \mathcal{K} \cap \mathcal{L}$, then there exists $x \in A \cap K$ and $A \subset L \cup f^{-1}(y_2) \cup \dots \cup f^{-1}(y_k)$. Since $K \subset f^{-1}(y_1)$, $x \in A \cap f^{-1}(y_1) \subset L$ and then $x \in K \cap L$, a contradiction. Therefore $\mathcal{K} \cap \mathcal{L} = \emptyset$. 
+ 

This shows that $\mathcal{K}$ is an open and closed (relative to $f_n^{-1}(B)$) subset of $f_n^{-1}(B)$, $A_1 \in \mathcal{K}$ and $A_2 \notin \mathcal{K}$. Hence $f_n^{-1}(B)$ is totally disconnected. Therefore $f_n$ is light. □ \ No newline at end of file diff --git a/samples/texts/7597009/page_16.md b/samples/texts/7597009/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..2bf78bb9f0bbbc7f22b93ac1629af74f64a926c5 --- /dev/null +++ b/samples/texts/7597009/page_16.md @@ -0,0 +1,25 @@ +**3.5. Universal Mappings.** + +**Definition 3.27.** A mapping $f: X \to Y$ is universal if for every continuous function $g: X \to Y$ there exists $p \in X$ such that $f(p) = g(p)$. + +In this definition, a particularly interesting case is when $f$ is the identity. Notice that the identity id: $X \to X$ is universal if and only if for every continuous function $g: X \to X$ there exists $p \in X$ such that $g(p) = p$. In other words, the identity is universal if and only if $X$ has the fixed point property. + +Thus, a first step in deciding whether universality of a mapping is preserved by the induced functions is to check what happens when the mapping is the simplest of all, the identity. + +**Example 3.28** (J. Oledzki, [28]). There exists a continuum $X$ with the fixed point property such that the hyperspace $\mathcal{F}_2(X)$ does not have the fixed point property. + +Example 3.28 implies the following. + +**Example 3.29** ([28]). There exists a continuum $X$ and the mapping id: $X \to X$ such that id is universal, but id₂: $\mathcal{F}_2(X) \to \mathcal{F}_2(X)$ is not universal. + +The other implication does not hold either. The authors have given an example in [15]. + +**Example 3.30** ([15]). There exists a continuum $X$ such that $\mathcal{F}_2(X)$ has the fixed point property but $X$ does not have the fixed point property. + +Thus, we obtain the following. 
+ +**Example 3.31.** There exists a continuum $X$ and the mapping id: $X \to X$ such that id₂: $\mathcal{F}_2(X) \to \mathcal{F}_2(X)$ is universal but id is not universal. + +**3.6. Atomic Mappings.** + +**Definition 3.32.** A mapping between continua $f: X \to Y$ is atomic if for every subcontinuum $K$ of $X$ we have that $f(K)$ is degenerate or $f^{-1}(f(K)) = K$. \ No newline at end of file diff --git a/samples/texts/7597009/page_17.md b/samples/texts/7597009/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..618cc44e826321a3c77dc382381f0a549e928356 --- /dev/null +++ b/samples/texts/7597009/page_17.md @@ -0,0 +1,17 @@ +**Theorem 3.33.** If $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is atomic for some $n \ge 2$ and $Y$ is nondegenerate then $f$ is a homeomorphism. + +*Proof.* Let $d$ be a metric for $Y$. It suffices to show that $f$ is one-to-one. Suppose on the contrary that there exist $x_1 \neq x_2$ in $X$ such that $f(x_1) = f(x_2)$. Since $Y$ is nondegenerate and $f$ is surjective there exists $x_3 \in X$ such that $f(x_1) \neq f(x_3)$. Let $d_0 = d(f(x_1), f(x_3))$. By Theorem 2.4 there exists an order arc $\alpha$ from $\{x_3\}$ to $X$. Let $F: \mathcal{C}(X) \to \mathcal{C}(Y)$ be the induced mapping by $f$. Then $F \circ \alpha: [0, 1] \to \mathcal{C}(Y)$ is a path from $\{f(x_3)\}$ to $Y$. + +Consider the function $g: C(Y) \to [0, \infty)$ given by $g(K) = d(f(x_1), K) = \min\{d(f(x_1), y) : y \in K\}$. Clearly $g$ is continuous. So the function $\beta = g \circ F \circ \alpha: [0, 1] \to [0, \infty)$ is continuous. Thus $\beta([0, 1])$ is a connected set. We have that $\beta(1) = d(f(x_1), Y) = 0$ and that $\beta(0) = d(f(x_1), \{f(x_3)\}) = d_0$. Since $\beta([0, 1])$ is connected, we can choose a number $t \in \beta^{-1}(\frac{d_0}{2})$. + +Notice that $F(\alpha(t))$ is a subcontinuum of $Y$ containing $f(x_3)$ and with distance to $f(x_1)$ less than $d_0$, so $F(\alpha(t))$ is nondegenerate. 
And the distance between $F(\alpha(t))$ and $f(x_1)$ is greater than zero, so $f(x_1) \notin F(\alpha(t))$. Therefore $A = \alpha(t)$ is a subcontinuum of $X$ that does not intersect $f^{-1}(f(x_1))$ and $f(A) = F(\alpha(t))$ is nondegenerate. + +Consider $\mathcal{K} = \{\{x_1, v\} : v \in A\}$; then $\mathcal{K}$ is homeomorphic to $A$ and, hence, is a subcontinuum of $\mathcal{F}_n(X)$. Thus $f_n(\mathcal{K}) = \{\{f(x_1), w\} : w \in f(A)\}$ is nondegenerate. Also, $f_n(\{x_2, x_3\}) = \{f(x_2), f(x_3)\} = \{f(x_1), f(x_3)\} \in f_n(\mathcal{K})$ and $\{x_2, x_3\} \notin \mathcal{K}$, so $f_n^{-1}(f_n(\mathcal{K})) \neq \mathcal{K}$. Therefore $f_n$ is not an atomic mapping. $\square$ + +## 3.7. Linking Mappings. + +**Definition 3.34.** A mapping between continua $f: X \to Y$ is linking if for every subcontinuum $B$ of $Y$ and every pair of components $C$ and $D$ of $f^{-1}(B)$ we have that $f(D) \cap f(C) \neq \emptyset$. + +**Theorem 3.35.** If $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is a linking mapping for some $n \in \mathbb{N}$, then $f: X \to Y$ is a linking mapping. + +*Proof.* Let $B$ be a subcontinuum of $Y$ and let $C$ and $D$ be two components of $f^{-1}(B)$. Let $\mathcal{C}$ and $\mathcal{D}$ be the components of $f_n^{-1}(\mathcal{F}_1(B))$ that contain $\mathcal{F}_1(C)$ and $\mathcal{F}_1(D)$, respectively. Let $M = \bigcup \{E : E \in \mathcal{C}\}$ and $N = \bigcup \{E : E \in \mathcal{D}\}$. Proceeding as in the proof of Theorem 3.19, it follows that $C = M$ and $D = N$. \ No newline at end of file diff --git a/samples/texts/7597009/page_18.md b/samples/texts/7597009/page_18.md new file mode 100644 index 0000000000000000000000000000000000000000..f41159f0f9c351ca2a3b7284c375758b49e784ed --- /dev/null +++ b/samples/texts/7597009/page_18.md @@ -0,0 +1,32 @@ +Since $f_n$ is linking, we have that $f_n(\mathcal{C}) \cap f_n(\mathcal{D}) \neq \emptyset$. Let $A_1 \in \mathcal{C}$ and $A_2 \in \mathcal{D}$ be such that $f_n(A_1) = f_n(A_2)$. 
Let $a_1 \in A_1$ and $a_2 \in A_2$ be such that $f(a_1) = f(a_2)$. Since $a_1 \in A_1 \in \mathcal{C}$, we have that $a_1 \in M = C$. Analogously, $a_2 \in D$ and then $f(D) \cap f(C) \neq \emptyset$. Therefore $f$ is linking. $\square$ + +**Example 3.36.** There exist a continuum $X$ and a mapping $f: X \rightarrow X$ such that $f$ is linking and its induced mapping $f_2: \mathcal{F}_2(X) \rightarrow \mathcal{F}_2(X)$ is not linking. + +*Proof*. Let $X = [0, 1]$ and $f: [0, 1] \rightarrow [0, 1]$ be given by: + +$$f(x) = \begin{cases} \frac{1}{2} + 2x, & \text{if } x \in [0, \frac{1}{4}], \\ \frac{3}{2} - 2x, & \text{if } x \in [\frac{1}{4}, \frac{3}{4}], \\ 2x - \frac{3}{2}, & \text{if } x \in [\frac{3}{4}, 1]. \end{cases}$$ + +Clearly $f$ is a linking mapping. + +To see that $f_2$ is not linking, consider the following set: $\mathcal{A} = \{\{\frac{3}{4}, y\} : y \in [\frac{1}{4}, \frac{3}{4}]\} \cup \{\{y, \frac{1}{4}\} : y \in [\frac{1}{4}, \frac{3}{4}]\}$. Notice that $\mathcal{A}$ is a subcontinuum of $\mathcal{F}_2([0, 1])$ and + +$$
f_2^{-1}(\mathcal{A}) = \bigcup_{u \in \{\frac{1}{8}, \frac{3}{8}, \frac{5}{8}, \frac{7}{8}\}} \left( \left\{ \{u, x\} : x \in [0, \tfrac{1}{8}] \right\} \cup \left\{ \{u, x\} : x \in [\tfrac{3}{8}, \tfrac{5}{8}] \right\} \cup \left\{ \{u, x\} : x \in [\tfrac{7}{8}, 1] \right\} \right).
$$ + +Also notice that the sets $\mathcal{D} = \{\{\frac{1}{8}, x\} : x \in [0, \frac{1}{8}] \}$ and $\mathcal{C} = \{\{\frac{7}{8}, x\} : x \in [\frac{7}{8}, 1]\}$ are closed, disjoint to each 
other and disjoint to all the other sets whose union is $f_2^{-1}(\mathcal{A})$. Hence $\mathcal{D}$ and $\mathcal{C}$ are components of $f_2^{-1}(\mathcal{A})$. Notice that $f_2(\mathcal{D}) = \{\{\frac{3}{4}, y\} : y \in [\frac{1}{2}, \frac{3}{4}] \}$ and $f_2(\mathcal{C}) = \{\{\frac{1}{4}, y\} : y \in [\frac{1}{4}, \frac{1}{2}] \}$ do not meet. It follows that $f_2$ is not a linking mapping. $\square$ + +## 3.8. Hereditary Properties. + +**Definition 3.37.** Given a property $\mathfrak{R}$ defined for mappings, we say that a mapping $f: X \to Y$ is hereditarily $\mathfrak{R}$ if for every subcontinuum $A$ of $X$ we have that $f|_A: A \to f(A)$ has property $\mathfrak{R}$. + +**Theorem 3.38.** If $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is hereditarily $\mathfrak{R}$, for some property $\mathfrak{R}$ and some $n \in \mathbb{N}$, then $f: X \to Y$ is hereditarily $\mathfrak{R}$. \ No newline at end of file diff --git a/samples/texts/7597009/page_19.md b/samples/texts/7597009/page_19.md new file mode 100644 index 0000000000000000000000000000000000000000..61f1135f9453ebaf180ae8bb012164cadd63680d --- /dev/null +++ b/samples/texts/7597009/page_19.md @@ -0,0 +1,17 @@ +*Proof.* Notice that for every subcontinuum $A$ of $X$, the mapping $f_n|_{\mathcal{F}_1(A)}: \mathcal{F}_1(A) \to \mathcal{F}_1(f(A))$ is topologically identical to $f|_A: A \to f(A)$. So since $\mathcal{F}_1(A)$ is a subcontinuum of $\mathcal{F}_n(X)$ and $f_n$ is hereditarily $\mathfrak{R}$, we have that $f_n|_{\mathcal{F}_1(A)}$, and therefore $f|_A$, has the property $\mathfrak{R}$. $\square$ + +Some of the most commonly studied hereditary properties are hereditarily monotone, hereditarily confluent, hereditarily weakly confluent, etc. For these properties we have the following results. + +**Theorem 3.39.** If $f_n : \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is hereditarily monotone for some $n \ge 2$ and $Y$ is nondegenerate, then $f : X \to Y$ is a homeomorphism. + +*Proof.* It suffices to show that $f$ is one-to-one. 
Suppose that there exist two different points $a, b \in X$ such that $f(a) = f(b) = y$. Since $Y$ is nondegenerate, there exists $v \in Y \setminus \{y\}$. Fix $x_0 \in X \setminus (f^{-1}(y) \cup f^{-1}(v))$; such a point exists because $X$ is connected, while $f^{-1}(y)$ and $f^{-1}(v)$ are disjoint, closed and nonempty. + +Let $\mathcal{A}=\{\{a,x\}: x \in X\}$, $\mathcal{C}=\{\{b,x\}: x \in X\}$ and $\mathcal{D} = \{\{x_0,x\}: x \in X\}$. Then $\mathcal{A}, \mathcal{C}$ and $\mathcal{D}$ are homeomorphic to $X$ and therefore they are continua. Notice that $\{a,b\} \in \mathcal{A} \cap \mathcal{C}$ and $\{b,x_0\} \in \mathcal{C} \cap \mathcal{D}$, so $\mathcal{G} = \mathcal{A} \cup \mathcal{C} \cup \mathcal{D}$ is a subcontinuum of $\mathcal{F}_n(X)$. + +We analyze the map $f_n|_{\mathcal{G}}: \mathcal{G} \to f_n(\mathcal{G})$. Since $f(x_0) \notin \{v,y\}$, $(f_n|_{\mathcal{G}})^{-1}(\{y,v\}) \subset \mathcal{A} \cup \mathcal{C}$, and in fact, $(f_n|_{\mathcal{G}})^{-1}(\{y,v\}) = \{\{a,x\}: x \in f^{-1}(v)\} \cup \{\{b,x\}: x \in f^{-1}(v)\}$, where the two sets of this union are closed, disjoint and nonempty. Therefore $(f_n|_{\mathcal{G}})^{-1}(\{y,v\})$ is not connected and $f_n$ is not hereditarily monotone. This concludes the proof of the theorem. $\square$ + +**Theorem 3.40.** If $f_n : \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is hereditarily weakly confluent for some $n \ge 3$ (and $Y$ is nondegenerate), then $f : X \to Y$ is a homeomorphism. + +*Proof.* Suppose that there exist two different points $a, b \in X$ such that $f(a) = f(b) = y_0$. Using order arcs we can construct two nondegenerate and disjoint subcontinua $K$ and $L$ of $Y$ which do not contain $y_0$. Let $e \in f^{-1}(L)$ and $c \in f^{-1}(K)$. Let $\mathcal{A} = \{\{a, b, x\} : x \in X\}$, $\mathcal{B} = \{\{a, e, x\} : x \in X\}$ and $\mathcal{C} = \{\{b, c, x\} : x \in X\}$. Then $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ are subcontinua of $\mathcal{F}_n(X)$. 
\ No newline at end of file diff --git a/samples/texts/7597009/page_2.md b/samples/texts/7597009/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..a6e7ee3d885606e640cf902069489d57d2791b7b --- /dev/null +++ b/samples/texts/7597009/page_2.md @@ -0,0 +1,30 @@ +# INDUCED MAPPINGS ON SYMMETRIC PRODUCTS + +GALO HIGUERA AND ALEJANDRO ILLANES + +**ABSTRACT.** Let $X$ be a metric continuum. For a positive integer $n$, let $\mathcal{F}_n(X)$ be the hyperspace of nonempty subsets of $X$ with at most $n$ points. For a given mapping between continua $f: X \to Y$, we study the induced mapping $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ given by $f_n(A) = f(A)$ (the image of $A$ under $f$). Given a topological or dynamical property $\mathcal{M}$ that mappings can have, we study under which conditions the fact that $f$ has property $\mathcal{M}$ implies that $f_n$ has property $\mathcal{M}$, and vice versa. + +## 1. INTRODUCTION + +A *continuum* is a nonempty compact connected metric space. +A continuum is said to be *nondegenerate* if it has more than one +point. Given a continuum *X*, consider the following hyperspaces of +*X*: + +$$2^X = \{\mathcal{A} \subset X : \mathcal{A} \text{ is nonempty and closed}\},$$ + +$$C(X) = \{A \in 2^X : A \text{ is connected}\}, \text{ and for each } n \ge 1,$$ + +$$\mathcal{F}_n(X) = \{A \in 2^X : A \text{ has at most } n \text{ points}\},$$ + +$$C_n(X) = \{A \in 2^X : A \text{ has at most } n \text{ components}\}.$$ + +All these hyperspaces are considered with the Hausdorff metric +$H$. In this paper the word *mapping* stands for a continuous and +surjective function. + +2010 Mathematics Subject Classification. Primary 54B20, 54H25; Secondary 54F15. + +Key words and phrases. 
Atomic, confluent, continuum, expansive homeomorphism, homeomorphism, hyperspace, induced mapping, light, linking, mapping, mixing, MO, monotone, open, OM, refinable, semi-confluent, solenoid, symmetric product, transitive, universal, weakly confluent, weakly mixing. + +©2010 Topology Proceedings. \ No newline at end of file diff --git a/samples/texts/7597009/page_20.md b/samples/texts/7597009/page_20.md new file mode 100644 index 0000000000000000000000000000000000000000..f41798a9058d22319936e03e4c40a5b32cd01faf --- /dev/null +++ b/samples/texts/7597009/page_20.md @@ -0,0 +1,22 @@ +Since $\{a, b, e\} \in \mathcal{A} \cap \mathcal{B}$ and $\{a, b, c\} \in \mathcal{A} \cap \mathcal{C}$, $\mathcal{M} = \mathcal{A} \cup \mathcal{B} \cup \mathcal{C}$ is a subcontinuum of $\mathcal{F}_n(X)$. Let $\mathcal{K} = \{\{y_0, f(e), w\} : w \in K\}$ and $\mathcal{L} = \{\{y_0, f(c), w\} : w \in L\}$. Since $\{y_0, f(c), f(e)\} \in \mathcal{K} \cap \mathcal{L}$, $\mathcal{R} = \mathcal{K} \cup \mathcal{L}$ is a subcontinuum of $\mathcal{F}_n(Y)$. Since $\mathcal{K} \subset f_n(\mathcal{B})$ and $\mathcal{L} \subset f_n(\mathcal{C})$, $\mathcal{R} \subset f_n(\mathcal{M})$. Since $f(e), y_0 \notin K$ and $y_0 \neq f(e)$, $\mathcal{K} \subset \mathcal{F}_3(Y) \setminus \mathcal{F}_2(Y)$. Analogously, $\mathcal{L} \subset \mathcal{F}_3(Y) \setminus \mathcal{F}_2(Y)$ and then $\mathcal{R} \subset \mathcal{F}_3(Y) \setminus \mathcal{F}_2(Y)$. + +We will show that $f_n|_{\mathcal{M}}: \mathcal{M} \to f_n(\mathcal{M})$ is not weakly confluent. Note that $(f_n|_{\mathcal{M}})^{-1}(\mathcal{R}) = (f_n^{-1}(\mathcal{R})\cap \mathcal{A}) \cup (f_n^{-1}(\mathcal{R})\cap \mathcal{B}) \cup (f_n^{-1}(\mathcal{R})\cap \mathcal{C})$. + +Notice that, for every $x \in X$, we have that $f_n(\{a, b, x\}) = \{y_0, f(x)\} \in \mathcal{F}_2(Y)$. Hence $f_n^{-1}(\mathcal{R}) \cap \mathcal{A} = \emptyset$ and $(f_n|_{\mathcal{M}})^{-1}(\mathcal{R}) = (f_n^{-1}(\mathcal{R}) \cap \mathcal{B}) \cup (f_n^{-1}(\mathcal{R}) \cap \mathcal{C})$. 
+ 

It is easy to prove that $f_n^{-1}(\mathcal{R}) \cap \mathcal{B} = \{\{a, e, x\} : x \in f^{-1}(K)\}$ and $f_n^{-1}(\mathcal{R}) \cap \mathcal{C} = \{\{b, c, x\} : x \in f^{-1}(L)\}$. + +This implies that $f_n(f_n^{-1}(\mathcal{R}) \cap \mathcal{B}) = \mathcal{K}$ and $f_n(f_n^{-1}(\mathcal{R}) \cap \mathcal{C}) = \mathcal{L}$. + +Let $\mathcal{N}$ be a component of $(f_n|_{\mathcal{M}})^{-1}(\mathcal{R})$. Notice that $f_n^{-1}(\mathcal{R}) \cap \mathcal{B}$ and $f_n^{-1}(\mathcal{R}) \cap \mathcal{C}$ are closed and disjoint, so $\mathcal{N} \subset f_n^{-1}(\mathcal{R}) \cap \mathcal{B}$ or $\mathcal{N} \subset f_n^{-1}(\mathcal{R}) \cap \mathcal{C}$. Since $K$ and $L$ are nondegenerate, $\mathcal{K} \subsetneq \mathcal{R}$ and $\mathcal{L} \subsetneq \mathcal{R}$, so $f_n(f_n^{-1}(\mathcal{R}) \cap \mathcal{B}) \subsetneq \mathcal{R}$ and $f_n(f_n^{-1}(\mathcal{R}) \cap \mathcal{C}) \subsetneq \mathcal{R}$. Thus $f_n(\mathcal{N}) \subsetneq \mathcal{R}$. Hence $f_n|_{\mathcal{M}}: \mathcal{M} \to f_n(\mathcal{M})$ is not weakly confluent, so $f_n$ is not hereditarily weakly confluent. $\square$ + +**Definition 3.41.** A subcontinuum $A$ of $X$ is terminal in $X$ if for every subcontinuum $B$ of $X$, we have that $B \subset A$ or $A \subset B$ or $A \cap B = \emptyset$. + +**Theorem 3.42.** If $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is hereditarily confluent, then $f: X \to Y$ is monotone and $f^{-1}(y)$ is a terminal subcontinuum of $X$ for each $y \in Y$. + +*Proof.* The proof is based on the following claim. + +**Claim 1.** If there exist $y \in Y$ and a subcontinuum $A$ of $X$ such that $A \cap f^{-1}(y) \neq \emptyset$, $f^{-1}(y) \not\subseteq A$ and $A \not\subseteq f^{-1}(y)$, then $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is not hereditarily confluent. + +*Proof of Claim 1.* Let $a \in A \cap f^{-1}(y)$, $b \in f^{-1}(y) \setminus A$ and $c \in A \setminus f^{-1}(y)$. Notice that $f(a), f(c) \in f(A)$ and $f(a) = y \neq f(c)$, so $f(A)$ is nondegenerate. 
Let $\mathcal{A} = \{\{a, x\} : x \in X\}$, $\mathcal{B} = \{\{b, x\} : x \in X\}$, $\mathcal{C} = \{\{c, x\} : x \in A\}$ and $\mathcal{M} = \mathcal{A} \cup \mathcal{B} \cup \mathcal{C}$. Notice that $\mathcal{A}, \mathcal{B}$ and $\mathcal{C}$ are closed, connected, $\{a, b\} \in \mathcal{A} \cap \mathcal{B}$ and $\{a, c\} \in \mathcal{A} \cap \mathcal{C}$, hence $\mathcal{M}$ is a subcontinuum of $\mathcal{F}_2(X)$. \ No newline at end of file diff --git a/samples/texts/7597009/page_21.md b/samples/texts/7597009/page_21.md new file mode 100644 index 0000000000000000000000000000000000000000..db922b75c964abed6a9d91969579bbbdf9e20f31 --- /dev/null +++ b/samples/texts/7597009/page_21.md @@ -0,0 +1,13 @@ +Let $\mathcal{K} = \{\{f(c), z\} : z \in f(A)\}$; then $\mathcal{K}$ is a nondegenerate subcontinuum of $\mathcal{F}_2(Y)$ such that $f_2(\mathcal{C}) = \mathcal{K}$, so $\mathcal{K}$ is a subcontinuum of $f_2(\mathcal{M})$. We show that $f_2|_{\mathcal{M}}: \mathcal{M} \to f_2(\mathcal{M})$ is not confluent. + +Since $(f_2|_{\mathcal{M}})^{-1}(\mathcal{K}) = (f_2^{-1}(\mathcal{K}) \cap \mathcal{A}) \cup (f_2^{-1}(\mathcal{K}) \cap \mathcal{B}) \cup (f_2^{-1}(\mathcal{K}) \cap \mathcal{C}) = \{\{a,x\} : x \in f^{-1}(f(c))\} \cup \{\{b,x\} : x \in f^{-1}(f(c))\} \cup \mathcal{C}$ and the sets $\{\{b,x\} : x \in f^{-1}(f(c))\}$ and $\{\{a,x\} : x \in f^{-1}(f(c))\} \cup \mathcal{C}$ are closed, disjoint and nonempty, there exists a component $\mathcal{D}$ of $(f_2|_{\mathcal{M}})^{-1}(\mathcal{K})$ that is contained in $\{\{b,x\} : x \in f^{-1}(f(c))\}$. But then $f_2(\mathcal{D}) \subset f_2(\{\{b,x\} : x \in f^{-1}(f(c))\}) = \{\{y,f(c)\}\} \subsetneq \mathcal{K}$. It follows that $f_2|_{\mathcal{M}}: \mathcal{M} \to f_2(\mathcal{M})$ is not confluent and therefore $f_2$ is not hereditarily confluent. This finishes the proof of Claim 1. + +Suppose that there exists $y \in Y$ such that $f^{-1}(y)$ is not connected. Then there exist two closed, disjoint and nonempty subsets $K$ and $L$ of $X$ such that $f^{-1}(y) = K \cup L$. 
Let $C$ be a component of $f^{-1}(y)$ contained in $K$. By Theorem 2.4 there exists an order arc $\alpha: [0,1] \to \mathcal{C}(X)$ from $C$ to $X$. Since $\alpha(0) = C \subset K$, $\alpha(0) \subset X \setminus L$; by the continuity of $\alpha$, there exists $s > 0$ such that $\alpha(s) \subset X \setminus L$ and $\alpha(s) \neq \alpha(0)$. Let $A = \alpha(s)$. Then $A$ is a subcontinuum of $X$ such that $\emptyset \neq L \subset f^{-1}(y) \setminus A$ and $\emptyset \neq C \subset f^{-1}(y) \cap A$. If $A \subset f^{-1}(y)$, then $A$ must be contained in a component of $f^{-1}(y)$, so $C = A$. This is absurd, so $A \not\subseteq f^{-1}(y)$. We have obtained a subcontinuum $A$ of $X$ such that $f^{-1}(y) \cap A \neq \emptyset$, $f^{-1}(y) \not\subseteq A$ and $A \not\subseteq f^{-1}(y)$. It follows from Claim 1 that $f_2$ is not hereditarily confluent; that is a contradiction. This shows that $f^{-1}(y)$ must be connected for each $y \in Y$ and therefore $f$ is monotone. + +Finally, Claim 1 also implies that $f^{-1}(y)$ is a terminal subcontinuum of $X$ for every $y \in Y$. $\square$ + +**Definition 3.43.** A continuum $K$ is decomposable if it is the union of two proper subcontinua. + +**Theorem 3.44.** If $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is hereditarily confluent, then for every decomposable subcontinuum $K$ of $Y$ and for every $y \in Y \setminus K$ the set $f^{-1}(y)$ is degenerate. + +*Proof.* Let $K \subset Y$ be a decomposable subcontinuum. Let $A$ and $B$ be proper subcontinua of $K$ such that $K = A \cup B$ and let $y \in Y \setminus K$. By Theorem 3.42, $f^{-1}(y)$, $f^{-1}(A)$ and $f^{-1}(B)$ are subcontinua of $X$. 
\ No newline at end of file diff --git a/samples/texts/7597009/page_22.md b/samples/texts/7597009/page_22.md new file mode 100644 index 0000000000000000000000000000000000000000..15f4d36776bec86654c14b95f16cf3a526085861 --- /dev/null +++ b/samples/texts/7597009/page_22.md @@ -0,0 +1,26 @@ +Suppose that $f^{-1}(y)$ contains two points $x_1 \neq x_2$. Let $b \in f^{-1}(B) \setminus f^{-1}(A)$. Let $\mathcal{A} = \{\{x_1, x\} : x \in f^{-1}(K)\}$, $\mathcal{B} = \{\{b, x\} : x \in f^{-1}(y)\}$, $\mathcal{C} = \{\{x_2, x\} : x \in f^{-1}(B)\}$ and $\mathcal{M} = \mathcal{A} \cup \mathcal{B} \cup \mathcal{C}$. Since $\{x_1, b\} \in \mathcal{A} \cap \mathcal{B}$ and $\{b, x_2\} \in \mathcal{B} \cap \mathcal{C}$, we have that $\mathcal{M}$ is a subcontinuum of $\mathcal{F}_2(X)$. + +We will show that $f_2|_{\mathcal{M}}: \mathcal{M} \to f_2(\mathcal{M})$ is not confluent. Let $\mathcal{K} = \{\{y, a\} : a \in A\}$. Notice that $\mathcal{K} \subset f_2(\mathcal{A})$, so $\mathcal{K}$ is a subcontinuum of $f_2(\mathcal{M})$. Since $b \notin f^{-1}(A) \cup f^{-1}(y)$, $f_2^{-1}(\mathcal{K}) \cap \mathcal{B} = \emptyset$. Then $(f_2|_{\mathcal{M}})^{-1}(\mathcal{K}) = (f_2^{-1}(\mathcal{K})\cap \mathcal{A}) \cup (f_2^{-1}(\mathcal{K})\cap \mathcal{C})$, and the sets $f_2^{-1}(\mathcal{K})\cap \mathcal{A}$ and $f_2^{-1}(\mathcal{K})\cap \mathcal{C}$ are closed and disjoint. Clearly $f_2^{-1}(\mathcal{K})\cap \mathcal{A} \neq \emptyset$ and, since $K$ is a continuum, $A \cap B \neq \emptyset$. This implies that $f_2^{-1}(\mathcal{K})\cap \mathcal{C} \neq \emptyset$. Then there exists a component $\mathcal{D}$ of $(f_2|_{\mathcal{M}})^{-1}(\mathcal{K})$ such that $\mathcal{D} \subset f_2^{-1}(\mathcal{K}) \cap \mathcal{C}$. Thus + +$$
\begin{aligned}
f_2(\mathcal{D}) &\subset f_2(f_2^{-1}(\mathcal{K}) \cap \mathcal{C}) = f_2(\mathcal{C}) \cap \mathcal{K} = \{\{y, x\} : x \in B\} \cap \mathcal{K} \\
&= \{\{y, x\} : x \in A \cap B\} \subsetneq \mathcal{K}
\end{aligned}
$$ + +and therefore $f_2|_{\mathcal{M}}: \mathcal{M} \to f_2(\mathcal{M})$ is not confluent. 
This contradiction proves that $f^{-1}(y)$ is a one-point set. $\square$ + +*Problem 3.45.* Suppose that $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is hereditarily confluent (weakly confluent) and $Y$ is nondegenerate. Must $f$ be a homeomorphism? + +### 3.9. Refinable Mappings. + +**Definition 3.46.** Let $X$ and $Y$ be continua and let $d$ be a metric for $Y$. Given continuous functions $f, g: X \to Y$, we define $\text{dsup}(f, g) = \max\{d(f(x), g(x)) : x \in X\}$. + +The following lemma is immediate. + +**Lemma 3.47.** Let $f, g: X \to Y$ be a pair of continuous functions between continua. Then for every $n \in \mathbb{N}$ and every $\varepsilon > 0$, $\text{dsup}(f,g) < \varepsilon$ if and only if $\text{dsup}(f_n, g_n) < \varepsilon$. + +**Definition 3.48.** A mapping $f: X \to Y$ is refinable if for every $\varepsilon > 0$, there exists a mapping $g: X \to Y$ such that $\text{dsup}(f,g) < \varepsilon$ and $\text{diam}(g^{-1}(y)) < \varepsilon$ for every $y \in Y$. + +**Theorem 3.49.** If $f: X \to Y$ is refinable, then $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is refinable for every $n \in \mathbb{N}$. \ No newline at end of file diff --git a/samples/texts/7597009/page_23.md b/samples/texts/7597009/page_23.md new file mode 100644 index 0000000000000000000000000000000000000000..b09d56312d5669bd187c610b25de573556d2aa88 --- /dev/null +++ b/samples/texts/7597009/page_23.md @@ -0,0 +1,23 @@ +*Proof.* Let $\varepsilon > 0$. Since $f$ is refinable, there exists a mapping $g: X \to Y$ such that $\text{dsup}(f, g) < \varepsilon$ and $\text{diam}(g^{-1}(y)) < \varepsilon$ for every $y \in Y$. By Lemma 3.47, $\text{dsup}(f_n, g_n) < \varepsilon$. Let $A = \{y_1, \dots, y_m\} \in \mathcal{F}_n(Y)$ and take $B, C \in g_n^{-1}(A)$. Let $c \in C$ and $i \in \{1, \dots, m\}$ be such that $g(c) = y_i$. Since $g_n(B) = A$, there exists $b \in B$ such that $g(b) = y_i$. Since $\text{diam}(g^{-1}(y_i)) < \varepsilon$, $d(c, b) < \varepsilon$. 
Therefore $C \subset N_\varepsilon(B)$ and, analogously, $B \subset N_\varepsilon(C)$; hence $H(B, C) < \varepsilon$. This shows that $\text{diam}(g_n^{-1}(A)) < \varepsilon$. Therefore, $f_n$ is refinable. $\square$ + +*Problem 3.50.* Is it true that if $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is refinable for some $n \ge 2$, then $f: X \to Y$ is refinable? + +#### 4. DYNAMICAL PROPERTIES + +In this section all the functions are of the form $f: X \to X$ and they are continuous but not necessarily surjective; also, $X$ always denotes a continuum. For each $k \in \{0, 1, \dots\}$ we define, inductively, $f^k$ by $f^0 = id_X$ and $f^k = f \circ f^{k-1}$. + +##### 4.1. Transitivity, Mixing properties and Chaos. + +**Definition 4.1.** A continuous function $f: X \to X$ is: + +(a) transitive if for every pair of nonempty open subsets $U$ and $V$ of $X$, there exists $k \in \mathbb{N}$ such that $f^k(U) \cap V \neq \emptyset$; + +(b) mixing if for every pair of nonempty open subsets $U$ and $V$ of $X$ there exists $N \in \mathbb{N}$ such that $f^k(U) \cap V \neq \emptyset$ for every $k \ge N$; + +(c) weakly mixing if for all nonempty open subsets $U_1, U_2, V_1$ and $V_2$ of $X$ there exists $k \in \mathbb{N}$ such that $f^k(U_i) \cap V_i \neq \emptyset$ for each $i \in \{1, 2\}$. + +**Lemma 4.2.** The family $\{\langle U_1, \dots, U_n \rangle_n \subset \mathcal{F}_n(X) : U_1, \dots, U_n \text{ are open in } X\}$ is a basis for the topology of $\mathcal{F}_n(X)$. + +*Proof.* We know that the family $\mathcal{B} = \{\langle U_1, \dots, U_m \rangle_n \subset \mathcal{F}_n(X) : m \in \mathbb{N} \text{ and } U_1, \dots, U_m \text{ are open in } X\}$ is a basis for $\mathcal{F}_n(X)$. Then it is enough to show that for every $\langle U_1, \dots, U_m \rangle_n$ in that family, and $A \in \langle U_1, \dots, U_m \rangle_n$, there exist $n$ open subsets of $X$, say $V_1, \dots, V_n$, such that $A \in \langle V_1, \dots, V_n \rangle_n \subset \langle U_1, \dots, U_m \rangle_n$. 
+ 

**Case 1.** $m \le n$. In this case, we take $V_i = U_i$ for each $i \in \{1, \dots, m\}$ and $V_i = U_m$ for each $i \in \{m+1, \dots, n\}$. Then $\langle V_1, \dots, V_n \rangle_n = \langle U_1, \dots, U_m \rangle_n$ and we are done. \ No newline at end of file diff --git a/samples/texts/7597009/page_24.md b/samples/texts/7597009/page_24.md new file mode 100644 index 0000000000000000000000000000000000000000..269673616495f314bdd189169a91232a22ee9c16 --- /dev/null +++ b/samples/texts/7597009/page_24.md @@ -0,0 +1,24 @@ +**Case 2.** $m > n$. Let $A = \{x_1, \dots, x_k\}$. For each $i \in \{1, \dots, k\}$, let $W_i = \bigcap \{U_j : x_i \in U_j\}$. It is easy to prove that $\langle W_1, \dots, W_k \rangle_n \subset \langle U_1, \dots, U_m \rangle_n$. Clearly $x_i \in W_i$, for each $i \in \{1, \dots, k\}$, so $A \in \langle W_1, \dots, W_k \rangle_n$. + +Since $A \in \mathcal{F}_n(X)$, we know that $k \le n$ and then it follows from Case 1 that there exist open subsets $V_1, \dots, V_n$ of $X$ such that $A \in \langle V_1, \dots, V_n \rangle_n \subset \langle W_1, \dots, W_k \rangle_n \subset \langle U_1, \dots, U_m \rangle_n$. This concludes the proof of the lemma. $\square$ + +**Theorem 4.3.** The following statements for a continuous function $f: X \to X$ are equivalent: + +a) $f$ is mixing, + +b) $f_n$ is mixing for some $n \in \mathbb{N}$, + +c) $f_n$ is mixing for every $n \in \mathbb{N}$. + +*Proof.* a) ⇒ c). Suppose that $f$ is mixing and let $n \in \mathbb{N}$. Let $\langle U_1, \dots, U_n \rangle_n$ and $\langle V_1, \dots, V_n \rangle_n$ be two basic nonempty open subsets of $\mathcal{F}_n(X)$ (Lemma 4.2). For each $i \in \{1, \dots, n\}$ there exists $N_i \in \mathbb{N}$ such that $f^k(U_i) \cap V_i \neq \emptyset$ for each $k \ge N_i$. Take $N = \max\{N_1, \dots, N_n\}$; then $f^k(U_i) \cap V_i \neq \emptyset$ for every $k \ge N$ and every $i \in \{1, \dots, n\}$. Given $k \ge N$ and $i \in \{1, \dots, n\}$, we can pick a point $x_i \in U_i \cap f^{-k}(V_i)$. Let $A_k = \{x_1, \dots, x_n\}$. Clearly $A_k \in \langle U_1, \dots, U_n \rangle_n$ and $(f_n)^k(A_k) \in \langle V_1, \dots, V_n \rangle_n$. Hence $(f_n)^k(\langle U_1, \dots, U_n \rangle_n) \cap \langle V_1, \dots, V_n \rangle_n \neq \emptyset$ for every $k \ge N$. Therefore $f_n$ is mixing. + +b) ⇒ a). Suppose that $f_n$ is mixing for some $n \in \mathbb{N}$. Let $U$ and $V$ be two nonempty open subsets of $X$. Then $\langle U \rangle_n$ and $\langle V \rangle_n$ are two nonempty open subsets of $\mathcal{F}_n(X)$. Since $f_n$ is mixing, there exists $N \in \mathbb{N}$ such that $(f_n)^k(\langle U \rangle_n) \cap \langle V \rangle_n \neq \emptyset$ for every $k \ge N$. For each $k \ge N$, let $A_k \in \langle U \rangle_n$ be such that $(f_n)^k(A_k) \in \langle V \rangle_n$ and let $x_k \in A_k \subset U$. Since $(f_n)^k(A_k) \in \langle V \rangle_n$, we have $f^k(A_k) \subset V$ and hence $f^k(x_k) \in V$. Therefore $f^k(U) \cap V \neq \emptyset$ for each $k \ge N$. This shows that $f$ is mixing. $\square$ + +The following theorem is well known (see for example [1]). The equivalence a) ⇐⇒ b) is due to H. Furstenberg. + +**Theorem 4.4.** The following statements for a continuous function $f: X \to X$ are equivalent: + +a) $f$ is weakly mixing. \ No newline at end of file diff --git a/samples/texts/7597009/page_25.md b/samples/texts/7597009/page_25.md new file mode 100644 index 0000000000000000000000000000000000000000..a3ddbedb33d0286ab98d7c7c09cca146a12af4ad --- /dev/null +++ b/samples/texts/7597009/page_25.md @@ -0,0 +1,26 @@ +b) For each $m \ge 2$ and all nonempty open subsets $U_1, \dots, U_m$, $V_1, \dots, V_m$ of $X$, there exists $k \in \mathbb{N}$ such that $f^k(U_i) \cap V_i \neq \emptyset$ for each $i \in \{1, \dots, m\}$. + +c) For all nonempty open subsets $U, V_1, V_2$ of $X$, there exists $k \in \mathbb{N}$ such that $f^k(U) \cap V_i \neq \emptyset$ for each $i \in \{1, 2\}$. + +**Theorem 4.5** (compare with Theorem 2 of [2]). *The following statements for a continuous function $f: X \to X$ are equivalent:* + +a) $f$ is weakly mixing. + +b) $f_n$ is weakly mixing for each $n \in \mathbb{N}$. 
+ 
c) $f_n$ is transitive for each $n \in \mathbb{N}$.

+d) $f_n$ is weakly mixing for some $n \ge 2$.

+e) $f_n$ is transitive for some $n \ge 2$.

+*Proof.* a) $\Rightarrow$ b). Suppose $f$ is weakly mixing and let $n \in \mathbb{N}$. Let $\mathcal{U}_1 = \langle U_1^1, \dots, U_n^1 \rangle_n$, $\mathcal{U}_2 = \langle U_1^2, \dots, U_n^2 \rangle_n$, $\mathcal{V}_1 = \langle V_1^1, \dots, V_n^1 \rangle_n$ and $\mathcal{V}_2 = \langle V_1^2, \dots, V_n^2 \rangle_n$ be nonempty basic open subsets of $\mathcal{F}_n(X)$ (Lemma 4.2). Since $f$ is weakly mixing, by Theorem 4.4 there exists $k \in \mathbb{N}$ such that $f^k(U_i^j) \cap V_i^j \neq \emptyset$ for every $i \in \{1, \dots, n\}$ and $j \in \{1, 2\}$. Given $i \in \{1, \dots, n\}$ and $j \in \{1, 2\}$ we choose a point $x_i^j \in f^{-k}(V_i^j) \cap U_i^j$. Let $A_1 = \{x_1^1, \dots, x_n^1\}$ and $A_2 = \{x_1^2, \dots, x_n^2\}$. Clearly $A_j \in \mathcal{U}_j$ and $(f_n)^k(A_j) \in \mathcal{V}_j$, for each $j \in \{1, 2\}$. Hence $(f_n)^k(\mathcal{U}_j) \cap \mathcal{V}_j \neq \emptyset$ for each $j \in \{1, 2\}$. This shows that $f_n$ is weakly mixing.

+The implications c) $\Rightarrow$ e) and b) $\Rightarrow$ d) are obvious; b) $\Rightarrow$ c) and d) $\Rightarrow$ e) follow directly from the definitions.

+e) $\Rightarrow$ a). We use Theorem 4.4. Let $U, V_1$ and $V_2$ be nonempty open subsets of $X$. Let $\mathcal{U} = \langle U \rangle_n$ and $\mathcal{V} = \langle V_1, V_2 \rangle_n$. Since $n \ge 2$, $\mathcal{V} \neq \emptyset$. Since $f_n$ is transitive, there exists $k \in \mathbb{N}$ such that $(f_n)^k(\mathcal{U}) \cap \mathcal{V} \neq \emptyset$. Then there exists $A \in \mathcal{U}$ such that $(f_n)^k(A) \in \mathcal{V}$, so $f^k(A) \cap V_1$ and $f^k(A) \cap V_2$ are both nonempty. Since $A \subset U$ it follows that $f^k(U) \cap V_i \neq \emptyset$ for each $i \in \{1, 2\}$. This shows that $f$ is weakly mixing and finishes the proof of the theorem.
$\square$

+*Notation 4.6.* Given a continuous function $f : X \to X$, per($f$) denotes the set of periodic points of $f$.

+**Lemma 4.7** (compare with Lemma 1 of [2]). For each continuous function $f: X \to X$ and each $n \in \mathbb{N}$, the set $\text{per}(f_n)$ is dense in $\mathcal{F}_n(X)$ if and only if $\text{per}(f)$ is dense in $X$. \ No newline at end of file diff --git a/samples/texts/7597009/page_26.md b/samples/texts/7597009/page_26.md new file mode 100644 index 0000000000000000000000000000000000000000..c048ea3f05225c85d34b7f012c1475c0b334a425 --- /dev/null +++ b/samples/texts/7597009/page_26.md @@ -0,0 +1,26 @@ +*Proof.* (Necessity). Let $n \in \mathbb{N}$. Suppose that $\text{per}(f_n)$ is dense in $\mathcal{F}_n(X)$. Let $x \in X$ and $\varepsilon > 0$. Since $\text{per}(f_n)$ is dense in $\mathcal{F}_n(X)$, there exists $A \in \text{per}(f_n) \cap B_H(\varepsilon, \{x\})$. Since $A \in \text{per}(f_n)$, there exists $k \in \mathbb{N}$ such that $(f_n)^k(A) = A$. This means that $f^k|_A: A \to A$ is a permutation of the elements of $A$. Since $A$ is finite and permutations of finite sets have finite order, there exists $r \in \mathbb{N}$ such that $(f^k)^r|_A = \text{id}_A$. Then $A \subset \text{per}(f)$. Since $A \in B_H(\varepsilon, \{x\})$, we have that $A \subset B(\varepsilon, x)$. This shows that $B(\varepsilon, x) \cap \text{per}(f) \neq \emptyset$ and that $\text{per}(f)$ is dense in $X$.

+(Sufficiency). Suppose $\text{per}(f)$ is dense in $X$. Let $n \in \mathbb{N}$, $A = \{x_1, \dots, x_m\} \in \mathcal{F}_n(X)$ and $\varepsilon > 0$. Since $\text{per}(f)$ is dense, for each $i \in \{1, \dots, m\}$, there exist $p_i \in \text{per}(f) \cap B(\varepsilon, x_i)$ and $n_i \in \mathbb{N}$ such that $f^{n_i}(p_i) = p_i$. Take $k = n_1 \cdots n_m$; then $f^k(p_i) = p_i$ for each $i \in \{1, \dots, m\}$. Let $B = \{p_1, \dots, p_m\}$. By construction, $(f_n)^k(B) = \{f^k(p_1), \dots, f^k(p_m)\} = B$, so $B \in \text{per}(f_n)$.
Since for every $i \in \{1, \dots, m\}$, $p_i \in B(\varepsilon, x_i)$, we have $B \in B_H(\varepsilon, A)$. This shows that $B \in \text{per}(f_n) \cap B_H(\varepsilon, A)$. Therefore, $\text{per}(f_n)$ is dense in $\mathcal{F}_n(X)$. $\square$

+**Definition 4.8.** A continuous function $f: X \to X$ is chaotic if it is transitive and $\text{per}(f)$ is dense in $X$.

+**Theorem 4.9.** The following statements for a continuous function $f: X \to X$ are equivalent:

+a) *f is chaotic and weakly mixing.*

+b) *$f_n$ is chaotic for some* $n \ge 2$.

+c) *$f_n$ is chaotic for each* $n \ge 2$.

+*Proof.* It follows directly from Lemma 4.7 and Theorem 4.5. $\square$

+**Example 4.10.** There exists a continuum $X$ and a continuous chaotic function $f: X \to X$ such that $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(X)$ is not chaotic for any $n \ge 2$.

+Let $X = [0, 1]$ and $f: [0, 1] \to [0, 1]$ be defined as follows:

+$$ f(x) = \begin{cases} 2x + \frac{1}{2}, & \text{if } x \in [0, \frac{1}{4}], \\ \frac{3}{2} - 2x, & \text{if } x \in [\frac{1}{4}, \frac{1}{2}], \\ 1 - x, & \text{if } x \in [\frac{1}{2}, 1]. \end{cases} $$

+By Theorem 4.9 it is enough to show that *f* is chaotic but not weakly mixing.

+**Claim 1.** *f* is not weakly mixing. \ No newline at end of file diff --git a/samples/texts/7597009/page_27.md b/samples/texts/7597009/page_27.md new file mode 100644 index 0000000000000000000000000000000000000000..a6f64a7def22db11ef706a24fa3bf00db50b00bd --- /dev/null +++ b/samples/texts/7597009/page_27.md @@ -0,0 +1,19 @@ +In order to prove Claim 1, let $U_1 = U_2 = V_1 = (0, \frac{1}{2})$ and $V_2 = (\frac{1}{2}, 1)$. Let $W_1 = [0, \frac{1}{2}]$ and $W_2 = [\frac{1}{2}, 1]$. Clearly, $f(W_1) = W_2$ and $f(W_2) = W_1$.
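The fact that $f$ swaps $W_1$ and $W_2$ can also be checked numerically; the following is a minimal sketch, where the helper `f` simply transcribes the piecewise definition from Example 4.10 and the sampling grid is an illustrative choice:

```python
def f(x):
    # piecewise interval map from Example 4.10 on [0, 1]
    if x <= 0.25:
        return 2 * x + 0.5
    elif x <= 0.5:
        return 1.5 - 2 * x
    else:
        return 1 - x

# sample W1 = [0, 1/2] and W2 = [1/2, 1] on a grid and check the swap
grid = [i / 1000 for i in range(1001)]
assert all(0.5 <= f(x) <= 1.0 for x in grid if x <= 0.5)   # f(W1) lands in W2
assert all(0.0 <= f(x) <= 0.5 for x in grid if x >= 0.5)   # f(W2) lands in W1
```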
Thus

$$f^k(W_1) = \begin{cases} W_1, & \text{if } k \text{ is even,} \\ W_2, & \text{if } k \text{ is odd.} \end{cases}$$

+Suppose that there exists $k \in \mathbb{N}$ such that $f^k(U_1) \cap V_1 \neq \emptyset$ and $f^k(U_2) \cap V_2 \neq \emptyset$. Take points $x \in U_1$ and $y \in U_2$ such that $f^k(x) \in V_1$ and $f^k(y) \in V_2$. If $k$ is odd, then $f^k(x) \in f^k(W_1) = W_2$, so $f^k(x) \in W_2 \cap V_1 = \emptyset$, a contradiction. If $k$ is even, $f^k(y) \in f^k(W_1) = W_1$, so $f^k(y) \in W_1 \cap V_2 = \emptyset$, a contradiction. This proves that $f$ is not weakly mixing.

+**Claim 2.** *f* is chaotic.

+In order to show Claim 2, let us take a look at $f^2$:

+$$f^2(x) = \begin{cases} \frac{1}{2} - 2x, & \text{if } x \in [0, \frac{1}{4}], \\ 2x - \frac{1}{2}, & \text{if } x \in [\frac{1}{4}, \frac{3}{4}], \\ \frac{5}{2} - 2x, & \text{if } x \in [\frac{3}{4}, 1]. \end{cases}$$

+Note that we can consider $f^2|_{[0, \frac{1}{2}]}: [0, \frac{1}{2}] \to [0, \frac{1}{2}]$ and $f^2|_{[\frac{1}{2}, 1]}: [\frac{1}{2}, 1] \to [\frac{1}{2}, 1]$ as independent dynamical systems. It is easy to see that both of these mappings are topologically conjugate ([12]) to the map known as the “tent map”, which was described in Example 3.13 in the previous section. It is known that this mapping is chaotic ([12]). It easily follows from this that *f* is chaotic.

+## 4.2. Specification and the P property.

+**Definition 4.11.** A continuous function $f: X \to X$ has specification if for every $\varepsilon > 0$ there exists $M_\varepsilon$ such that for each $k \ge 2$, for all $x_1, \dots, x_k \in X$ and for all nonnegative integers $a_1 \le b_1 < a_2 \le b_2 < \dots < a_k \le b_k$ with $a_i - b_{i-1} \ge M_\varepsilon$ for each $i \in \{2, \dots, k\}$, there exists $z \in X$ such that for every $i \in \{1, \dots, k\}$ and every $m \in \{a_i, \dots, b_i\}$ we have that $d(f^m(z), f^m(x_i)) < \varepsilon$.
+ 
**Definition 4.12.** A continuous function $f: X \to X$ has the P property if for every pair of nonempty open subsets $U_0, U_1$ of $X$ there is an $N \in \mathbb{N}$ such that for each $k \ge 2$ and each $s = (s(1), \dots, s(k)) \in \{0, 1\}^k$ there exists $x \in X$ such that $x \in U_{s(1)}$, $f^N(x) \in U_{s(2)}$, $f^{2N}(x) \in U_{s(3)}$, ..., $f^{(k-1)N}(x) \in U_{s(k)}$. \ No newline at end of file diff --git a/samples/texts/7597009/page_28.md b/samples/texts/7597009/page_28.md new file mode 100644 index 0000000000000000000000000000000000000000..85661eb211fc768ba300f95e504662b4d2d3dc84 --- /dev/null +++ b/samples/texts/7597009/page_28.md @@ -0,0 +1,31 @@ +The relation between a mapping having one of these properties and the induced mapping to some hyperspaces having it was studied by Rongbao Gu and Wenjing Guo in [14]. The proofs of Theorems 4.1 and 4.2 of [14] can easily be adapted to prove the following theorems.

+**Theorem 4.13.** *The following statements for a continuous function $f: X \to X$ are equivalent:*

+a) $f$ has specification,

+b) $f_n$ has specification for some $n \ge 2$,

+c) $f_n$ has specification for each $n \ge 2$.

+**Theorem 4.14.** *The following statements for a continuous function $f: X \to X$ are equivalent:*

+a) $f$ has the P property,

+b) $f_n$ has the P property for some $n \ge 2$,

+c) $f_n$ has the P property for each $n \ge 2$.

+**4.3. Expansive Homeomorphisms.** Given a homeomorphism $f: X \to X$ we can define its negative iterations as $f^{-n} = (f^{-1})^n$ for each $n \in \mathbb{N}$, where $f^{-1}$ is the inverse of $f$.

+**Definition 4.15.** A homeomorphism $f: X \to X$ is expansive if there exists $c > 0$ such that for every two different points $x, y \in X$ there is $k \in \mathbb{Z}$ such that $d(f^k(x), f^k(y)) > c$. In such case we call $c$ an expansion constant for $f$.
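To get a feel for the separation demanded by Definition 4.15, consider the angle-doubling map $t \mapsto 2t \bmod 1$, which is the circle map $z \mapsto z^2$ of Example 4.22 below in angular coordinates (it is not a homeomorphism, so this only illustrates forward separation of orbits, not expansiveness proper). A hedged sketch, with the circle normalized to circumference 1 and the constant $\tfrac14$ an illustrative choice:

```python
def d(s, t):
    # shorter-arc metric on the circle of circumference 1
    a = abs(s - t) % 1.0
    return min(a, 1.0 - a)

def double(t):
    # angle-doubling map, i.e. z -> z^2 on the unit circle
    return (2.0 * t) % 1.0

# iterate two very close points until their orbits separate by more than 1/4;
# while d < 1/4 the distance doubles exactly at every step
s, t, k = 0.2, 0.2 + 1e-6, 0
while d(s, t) <= 0.25 and k < 60:
    s, t = double(s), double(t)
    k += 1
assert d(s, t) > 0.25
```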
+ 
**Theorem 4.16.** If $f_n : \mathcal{F}_n(X) \to \mathcal{F}_n(X)$ is an expansive homeomorphism for some $n \in \mathbb{N}$, then $f : X \to X$ is an expansive homeomorphism.

+*Proof.* Let $n \in \mathbb{N}$ be such that $f_n$ is an expansive homeomorphism. By Theorem 3.1, $f$ is a homeomorphism. Let $c > 0$ be an expansion constant for $f_n$. We will show that $c$ is also an expansion constant for $f$.

+Given $x \neq y \in X$ we know that $\{x\} \neq \{y\} \in \mathcal{F}_n(X)$, and then there exists $k \in \mathbb{Z}$ such that

+$$c < H((f_n)^k(\{x\}), (f_n)^k(\{y\})) = d(f^k(x), f^k(y)).$$

+Thus $c$ is an expansion constant for $f$. Therefore $f$ is an expansive homeomorphism. $\square$ \ No newline at end of file diff --git a/samples/texts/7597009/page_29.md b/samples/texts/7597009/page_29.md new file mode 100644 index 0000000000000000000000000000000000000000..159a0d6d4054a63a956a9e72590e11f1d8d42dc7 --- /dev/null +++ b/samples/texts/7597009/page_29.md @@ -0,0 +1,23 @@ +**4.3.1. Inverse Limits and Shift Homeomorphisms.**

+**Definition 4.17.** An inverse sequence is a sequence of pairs $\{(X_k, f_k)\}_{k=1}^\infty$, where for every $k \in \mathbb{N}$, $X_k$ is a continuum and $f_k: X_{k+1} \to X_k$ is a continuous function.

+**Definition 4.18.** Given an inverse sequence $\{(X_k, f_k)\}_{k=1}^\infty$ we define its inverse limit as:

+$$ \lim_{\leftarrow} (X_k, f_k) = \{(x_k)_{k=1}^\infty \in \prod_{k=1}^\infty X_k : f_k(x_{k+1}) = x_k \text{ for every } k \in \mathbb{N}\}. $$

+The metric in $\prod_{k=1}^\infty X_k$ is given by $\rho((x_1, x_2, \dots), (y_1, y_2, \dots)) = \sum_{k=1}^\infty \frac{d_k(x_k, y_k)}{2^k}$, where $d_k$ is a metric for $X_k$ bounded by a number $M$. It is known that the inverse limit of each inverse sequence $\{(X_k, f_k)\}_{k=1}^\infty$, where each $X_k$ is a continuum, is a continuum.
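In computations one can only handle finitely many coordinates, so a sketch of $\rho$ necessarily truncates the series; the following is a minimal illustration (the name `rho`, the truncation, and the choice of $d_k(a, b) = |a - b|$ on $[0, 1]$, bounded by $M = 1$, are all assumptions for the example):

```python
def rho(xs, ys, d):
    # truncated version of rho((x_k), (y_k)) = sum_k d_k(x_k, y_k) / 2^k,
    # using the same metric d in every factor space
    return sum(d(x, y) / 2 ** (k + 1) for k, (x, y) in enumerate(zip(xs, ys)))

d = lambda a, b: abs(a - b)          # metric on [0, 1], bounded by M = 1
xs, ys = [1.0] * 20, [0.0] * 20      # two threads differing by 1 in every coordinate
# the truncated sum is 1 - 2^-20, already close to the full series value 1
assert abs(rho(xs, ys, d) - (1 - 2 ** -20)) < 1e-12
```

The geometric weights $1/2^k$ make far-out coordinates negligible, which is why such truncations approximate $\rho$ well.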
+ 
**Notation 4.19.** Given a continuous function $f: X \to X$ one can consider the inverse sequence $\{(X_k, f_k)\}_{k=1}^\infty$, where $X_k = X$ and $f_k = f$ for every $k \in \mathbb{N}$. The inverse limit of this particular inverse sequence is simply denoted by $\lim_{\leftarrow} (X, f)$.

+**Definition 4.20.** Given $Y = \lim_{\leftarrow} (X, f)$ we define a function $\hat{f}: Y \to Y$ as $\hat{f}((x_1, x_2, \dots)) = (f(x_1), f(x_2), \dots) = (f(x_1), x_1, x_2, \dots)$.

+Notice that $(f(x_1), x_1, x_2, \dots)$ belongs to $Y$, so $\hat{f}$ is well defined.

+The following theorem is immediate.

+**Theorem 4.21.** Given a continuous function $f: X \to X$, if $Y = \lim_{\leftarrow} (X, f)$, then the function $\hat{f}: Y \to Y$ is a homeomorphism, called the shift homeomorphism, and the inverse of $\hat{f}$ is given by $(\hat{f})^{-1}((x_1, x_2, \dots)) = (x_2, x_3, \dots)$.

+**Example 4.22.** There exists a continuum $X$ and an expansive homeomorphism $g: X \to X$ such that $g_n: \mathcal{F}_n(X) \to \mathcal{F}_n(X)$ is not an expansive homeomorphism for any $n \ge 2$.

+Let $n \ge 2$. Consider the unit circle $\mathbb{S}^1$, centered at the origin in the complex plane $\mathbb{C}$. We give $\mathbb{S}^1$ the metric of the shorter arc; that is, given $z, w \in \mathbb{S}^1$ we define their distance $d(z, w)$ to be the length of the shortest arc containing $z$ and $w$. \ No newline at end of file diff --git a/samples/texts/7597009/page_3.md b/samples/texts/7597009/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..54031b28d02ac520cf06b0bb920fabfbf21f354c --- /dev/null +++ b/samples/texts/7597009/page_3.md @@ -0,0 +1,26 @@ +Every mapping between continua $f: X \to Y$ induces a mapping between each of the respective hyperspaces in the following way: $2^f: 2^X \to 2^Y$ is defined as $2^f(A) = \{f(a) : a \in A\}$. The induced mapping to the other hyperspaces is simply the restriction of $2^f$ to each of these hyperspaces.
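On finite subsets the induced map is easy to model directly; a minimal sketch (the name `induced` is ours, and the example map is an arbitrary illustration) showing in particular that cardinality can drop under the image, which is why $2^f$ restricts to a map $\mathcal{F}_n(X) \to \mathcal{F}_n(Y)$:

```python
def induced(f, A):
    # 2^f(A) = { f(a) : a in A }, restricted here to finite sets
    return frozenset(f(a) for a in A)

A = frozenset({-1.0, 0.5, 1.0})
image = induced(lambda x: x * x, A)
# -1 and 1 collapse to the same image point, so |f_n(A)| <= |A|
assert image == frozenset({1.0, 0.25})
```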
+ 
The induced mappings to other hyperspaces such as $2^X$, $C(X)$ and $C_n(X)$ have been previously studied by several authors (see for example [3]-[7], [9]-[11], [16]-[21] and [23]). In this paper we make a systematic study of the induced mapping $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$, for a mapping between continua $f: X \to Y$. Given a topological or dynamical property $\mathcal{M}$ that mappings can have, we study under which conditions the fact that $f$ has property $\mathcal{M}$ implies that $f_n$ has property $\mathcal{M}$, and vice versa.

+Some previous basic results have been shown in the theses [13] and [29].

+The authors wish to thank Gerardo Acosta, Mauricio E. Chacón, Rodrigo J. Hernández, Héctor Méndez-Lango, Juan Mireles, Christopher Mouron and Norberto Ordóñez for fruitful discussions on the topic of this paper.

+Besides the introduction and a section of preliminaries, we divide the paper into two big sections. In the first one, we study traditional (or topological) properties defined for mappings between continua such as openness, monotoneity, confluence, etc. In the second one we study dynamical properties such as transitivity, mixing, chaos, etc.

+## 2. PRELIMINARIES

+The symbol $\mathbb{N}$ denotes the set of positive integers. All spaces are assumed to be continua unless otherwise stated. Given a continuum $X$, a point $a \in X$ and $\varepsilon > 0$, the $\varepsilon$-ball around $a$ is denoted by $B(\varepsilon, a)$. The $\varepsilon$-ball around $A$ in $\mathcal{F}_n(X)$ is denoted by $B_H(\varepsilon, A)$. Also, for $A \subset X$, we denote the diameter of $A$ by $\text{diam}(A)$ and $N_\varepsilon(A) = \bigcup \{B(\varepsilon, a) : a \in A\}$. All the hyperspaces are considered with the Hausdorff metric (see [22, 2.1, p. 11]). It is known that this topology coincides with the *Vietoris Topology* (see [22, 3.1, p.
16]) defined as follows: given a finite collection of subsets $U_1, \dots, U_m$ of $X$ we define \ No newline at end of file diff --git a/samples/texts/7597009/page_30.md b/samples/texts/7597009/page_30.md new file mode 100644 index 0000000000000000000000000000000000000000..3076610f65a62131abd11338f95599221796be71 --- /dev/null +++ b/samples/texts/7597009/page_30.md @@ -0,0 +1,30 @@ +This is clearly a metric which is compatible with the norm metric inherited from $\mathbb{C}$. Let $f: \mathbb{S}^1 \to \mathbb{S}^1$ be defined as $f(z) = z^2$.

+Let $X = \lim_{\leftarrow} (\mathbb{S}^1, f)$. It is known that $\hat{f}: X \to X$ is an expansive homeomorphism (see for example [31]). We show that $\hat{f}_n : \mathcal{F}_n(X) \to \mathcal{F}_n(X)$ is not an expansive homeomorphism for any $n \ge 2$.

+**Claim 1.** Given $z, w \in \mathbb{S}^1$ there exist $u, v \in \mathbb{S}^1$ such that $f(u) = z$, $f(v) = w$ and $d(u, v) = \frac{d(z,w)}{2}$.

+To prove Claim 1, we consider two cases.

+**Case 1.** The complex number 1 is not in the interior of the shortest arc of $\mathbb{S}^1$ joining $z$ and $w$.

+In this case, we write $z = e^{i\alpha}$ and $w = e^{i\beta}$ with $\alpha, \beta \in [0, 2\pi)$. Then $d(z, w) = |\alpha - \beta|$. In this case, define $u = e^{i\frac{\alpha}{2}}$ and $v = e^{i\frac{\beta}{2}}$.

+**Case 2.** The complex number 1 is in the interior of the shortest arc of $\mathbb{S}^1$ joining $z$ and $w$.

+In this case, the complex number $-1$ is not in the shortest arc joining $z$ and $w$. Then we write $z = e^{i\alpha}$ and $w = e^{i\beta}$ with $\alpha, \beta \in (-\pi, \pi)$. In this case, let $u = e^{i\frac{\alpha}{2}}$ and $v = e^{i\frac{\beta}{2}}$.

+The following claim is easy to prove.

+**Claim 2.** Let $z, w \in \mathbb{S}^1$ be such that $d(z, w) < \frac{\pi}{2}$. Then $d(f(z), f(w)) = 2d(z, w)$.

+**Claim 3.** The homeomorphism $\hat{f}_n : \mathcal{F}_n(X) \to \mathcal{F}_n(X)$ is not expansive.
+ 
*Proof of Claim 3.* Let $0 < c < \frac{\pi}{4}$. We will show that $c$ is not an expansion constant for $\hat{f}_n$. Let $p \in \mathbb{N}$ and $z_1, w_1 \in \mathbb{S}^1$ be such that $\frac{\pi}{2^p} < c$ and $0 < d(z_1, w_1) < \frac{c}{2^p}$. Applying Claim 1 successively, it is possible to construct two sequences $\{z_k\}_{k=1}^\infty$ and $\{w_k\}_{k=1}^\infty$ in $\mathbb{S}^1$ such that $f(z_{k+1}) = z_k$, $f(w_{k+1}) = w_k$ and $d(z_{k+1}, w_{k+1}) = \frac{d(z_1, w_1)}{2^k}$ for every $k \in \mathbb{N}$.

+Notice that the points $z = (z_1, z_2, \dots)$ and $w = (w_1, w_2, \dots)$ belong to $X$.

+First, we will show that $\rho(\hat{f}^m(z), \hat{f}^m(w)) < c$ for every integer $m \le p$. \ No newline at end of file diff --git a/samples/texts/7597009/page_31.md b/samples/texts/7597009/page_31.md new file mode 100644 index 0000000000000000000000000000000000000000..323df818837944e8f5c04b128e5f601daa93ebf7 --- /dev/null +++ b/samples/texts/7597009/page_31.md @@ -0,0 +1,33 @@ +By the definition of $(\hat{f})^{-1}$ we have that, for each $m \in \{0, 1, \dots\}$,

+$$
+\begin{aligned}
+\rho(\hat{f}^{-m}(z), \hat{f}^{-m}(w)) &= \rho((z_{m+1}, z_{m+2}, \dots), (w_{m+1}, w_{m+2}, \dots)) \\
+&= \sum_{k=1}^{\infty} \frac{d(z_{m+k}, w_{m+k})}{2^k} = \sum_{k=1}^{\infty} \frac{\frac{d(z_1, w_1)}{2^{m+k-1}}}{2^k} \\
+&= \sum_{k=1}^{\infty} \frac{d(z_1, w_1)}{2^{2k+m-1}} \le \sum_{k=1}^{\infty} \frac{d(z_1, w_1)}{2^k} \\
+&= d(z_1, w_1) < \frac{c}{2^p} < c.
+\end{aligned}
+$$

+Hence, we have shown that $\rho(\hat{f}^m(z), \hat{f}^m(w)) < c$ for each integer $m \le 0$.

+Next, by induction, we will show that for each $m \in \{0, 1, \dots, p\}$, $d(f^m(z_1), f^m(w_1)) = 2^m d(z_1, w_1)$. The case $m = 0$ is immediate. Suppose that $m \in \{0, 1, \dots, p-1\}$ is such that $d(f^m(z_1), f^m(w_1)) = 2^m d(z_1, w_1)$. Since $m < p$, we have $d(f^m(z_1), f^m(w_1)) = 2^m d(z_1, w_1) < 2^p \cdot \frac{c}{2^p} = c < \frac{\pi}{2}$, so Claim 2 gives $d(f^{m+1}(z_1), f^{m+1}(w_1)) = 2 d(f^m(z_1), f^m(w_1)) = 2^{m+1} d(z_1, w_1)$.

+**Definition 4.23.** A homeomorphism $f: X \to X$ is continuum-wise expansive if there exists $c > 0$ such that for each nondegenerate subcontinuum $Y$ of $X$, there exists $k \in \mathbb{Z}$ such that $\text{diam}(f^k(Y)) > c$.
In such case $c$ is a continuum-wise expansion constant for $f$. + +**Lemma 4.24.** Let $\mathcal{K}$ be a nondegenerate subcontinuum of $\mathcal{F}_n(X)$ and let $D$ be a component of the set $\bigcup \{E : E \in \mathcal{K}\}$. Then $\text{diam}(\mathcal{K}) \ge \frac{\text{diam}(D)}{2n}$. + +*Proof.* Fix $A = \{a_1, \dots, a_m\} \in \mathcal{K}$ and let $r < \frac{\text{diam}(D)}{2n}$. For every $i \in \{1, \dots, m\}$ let $U_i = B(r, a_i) \cap D$. Then $U_i = \emptyset$ or $\text{diam}(U_i) \le 2r$. + +We show that $U_1 \cup \dots \cup U_m \subsetneq D$. Suppose, to the contrary, that $D = U_1 \cup \dots \cup U_m$. Let $x, y \in D$ be such that $d(x, y) = \text{diam}(D)$. Let $i_0, j_0 \in \{1, \dots, m\}$ be such that $x \in U_{i_0}$ and $y \in U_{j_0}$. Since $D$ is connected and every $U_i$ is open in $D$, there exists $\{i_0, i_1, \dots, i_k\} \subset \{1, \dots, m\}$ such that $i_k = j_0$, $i_p \ne i_q$ for every $p \ne q$ and $U_{i_p} \cap U_{i_{p+1}} \ne \emptyset$ for each $p \in \{0, 1, \dots, k-1\}$. This is a chain of $k+1$ open subsets from $x$ to $y$ and every link has diameter at most $2r$. Then, $d(x, y) \le 2r(k+1) \le 2rm \le 2rn < \text{diam}(D)$; this is a contradiction. Therefore $U_1 \cup \dots \cup U_m \subsetneq D$. + +Let $b \in D \setminus (U_1 \cup \dots \cup U_m) = D \setminus \bigcup \{B(r, a_i) : i \in \{1, \dots, m\}\}$. This implies that $a_i \notin B(r, b)$, for any $i \in \{1, \dots, m\}$. Let $B \in \mathcal{K}$ be such that $b \in B$. Notice that $H(A, B) \ge r$. Then $\text{diam}(\mathcal{K}) \ge r$. Since $r$ was an arbitrary number less than $\frac{\text{diam}(D)}{2n}$, we conclude that $\text{diam}(\mathcal{K}) \ge \frac{\text{diam}(D)}{2n}$. 
$\square$

+**Theorem 4.25.** The following statements for a continuous function $f: X \to X$ are equivalent:

+a) $f$ is a continuum-wise expansive homeomorphism,

+b) $f_n$ is a continuum-wise expansive homeomorphism for some $n \ge 2$,

+c) $f_n$ is a continuum-wise expansive homeomorphism for each $n \ge 2$.

+*Proof.* b) ⇒ a). Suppose that $f_n$ is a continuum-wise expansive homeomorphism for some $n \ge 2$ and let $c > 0$ be a continuum-wise expansion constant for $f_n$. We claim that $c$ is a continuum-wise expansion constant for $f$. Let $K$ be a nondegenerate subcontinuum of $X$. \ No newline at end of file diff --git a/samples/texts/7597009/page_34.md b/samples/texts/7597009/page_34.md new file mode 100644 index 0000000000000000000000000000000000000000..b1f98277d0772232f4d9d8545c7c40b92e0bff97 --- /dev/null +++ b/samples/texts/7597009/page_34.md @@ -0,0 +1,19 @@ +Then $\mathcal{F}_1(K)$ is a nondegenerate subcontinuum of $\mathcal{F}_n(X)$. Therefore there exists $k \in \mathbb{Z}$ such that $\text{diam}((f_n)^k(\mathcal{F}_1(K))) > c$. Clearly $\text{diam}(f^k(K)) = \text{diam}((f_n)^k(\mathcal{F}_1(K)))$. Therefore $c$ is a continuum-wise expansion constant for $f$.

+a) $\Rightarrow$ c). Let $n \in \mathbb{N}$. Suppose that $f$ is a continuum-wise expansive homeomorphism and let $c > 0$ be a continuum-wise expansion constant for $f$. We claim that $\frac{c}{2n}$ is a continuum-wise expansion constant for $f_n$. Let $\mathcal{K}$ be a nondegenerate subcontinuum of $\mathcal{F}_n(X)$. Lemma 2.1 implies that $\bigcup \{E : E \in \mathcal{K}\}$ has at most $n$ components, but $\mathcal{K}$ has infinitely many elements, so one of those components must be nondegenerate. Let $D$ be a nondegenerate component of $\bigcup \{E : E \in \mathcal{K}\}$. Since $c$ is a continuum-wise expansion constant for $f$, there exists $k \in \mathbb{Z}$ such that $\text{diam}(f^k(D)) > c$.
Given $a \in f^k(D)$ there exists $x \in D$ such that $f^k(x) = a$, and since $D \subset \bigcup \{E : E \in \mathcal{K}\}$ there is $A \in \mathcal{K}$ such that $x \in A$. Then $a = f^k(x) \in (f_n)^k(A)$ and $(f_n)^k(A) \in (f_n)^k(\mathcal{K})$. This shows that $f^k(D) \subset \bigcup \{E : E \in (f_n)^k(\mathcal{K})\}$. Let $E_0$ be the component of $\bigcup \{E : E \in (f_n)^k(\mathcal{K})\}$ that contains $f^k(D)$. Then $\text{diam}(E_0) \ge \text{diam}(f^k(D)) > c$. Lemma 4.24 implies that $\text{diam}((f_n)^k(\mathcal{K})) \ge \frac{\text{diam}(E_0)}{2n}$ and therefore $\text{diam}((f_n)^k(\mathcal{K})) > \frac{c}{2n}$. This shows that $\frac{c}{2n}$ is a continuum-wise expansion constant for $f_n$. Therefore, $f_n$ is a continuum-wise expansive homeomorphism. $\square$

+## ACKNOWLEDGMENT

+The authors wish to thank Leonardo Espinosa for his technical help during the preparation of this paper.

+## REFERENCES

+[1] J. Banks, *Topological mapping properties defined by digraphs*, Discrete Contin. Dynam. Systems **5** No. 1 (1999), 83–92.

+[2] J. Banks, *Chaos for induced hyperspace maps*, Chaos Solitons and Fractals **25** (2005), 681–685.

+[3] J. J. Charatonik and W. J. Charatonik, *Hereditarily weakly confluent mappings are homeomorphisms*, Colloq. Math. **75** (1998), 195–203.

+[4] J. J. Charatonik and W. J. Charatonik, *Lightness of the induced mappings*, Tsukuba J. Math. **22** (1998), 179–192.

+[5] J. J. Charatonik and W. J. Charatonik, *Induced MO-mappings*, Tsukuba J. Math. **23** (1999), 245–252. \ No newline at end of file diff --git a/samples/texts/7597009/page_35.md b/samples/texts/7597009/page_35.md new file mode 100644 index 0000000000000000000000000000000000000000..af88000e81416a934560b6541ceef40550938aa3 --- /dev/null +++ b/samples/texts/7597009/page_35.md @@ -0,0 +1,41 @@ +[6] J. J. Charatonik and W. J. Charatonik, *Limit properties of induced mappings*, Topology Appl. **100** (2000), 103–118.

+[7] J. J. Charatonik, W. J. Charatonik and A.
Illanes, *Openness of induced mappings*, Topology Appl. **98** (1999), 67–80.

+[8] J. J. Charatonik and A. Illanes, *Local connectedness in hyperspaces*, Rocky Mountain J. Math. **36** (2006), 811–856.

+[9] J. J. Charatonik, A. Illanes and S. Macías, *Induced mappings on the hyperspaces $C_n(X)$ of a continuum X*, Houston J. Math. **28** (2002), 781–805.

+[10] W. J. Charatonik, *Arc approximation property and confluence of induced mappings*, Rocky Mountain J. Math. **28** (1998), 107–154.

+[11] W. J. Charatonik, *Openness and monotoneity of induced mappings*, Proc. Amer. Math. Soc. **127** (1999), 3729–3731.

+[12] R. L. Devaney, *An introduction to chaotic dynamical systems*, 2nd ed., Addison-Wesley, 1989.

+[13] J. L. Gómez-Rueda, *Funciones inducidas entre continuos*, Tesis de Licenciatura, Facultad de Ciencias, Universidad Nacional Autónoma de México, July 2002.

+[14] R. Gu and W. Guo, *On mixing property in set-valued discrete systems*, Chaos Solitons and Fractals **28** (2006), 747–754.

+[15] G. Higuera and A. Illanes, *Fixed Point Property on Symmetric Products*, preprint.

+[16] H. Hosokawa, *Induced mappings between hyperspaces*, Bull. Tokyo Gakugei Univ. (4) **41** (1989), 1–6.

+[17] H. Hosokawa, *Mappings of hyperspaces induced by refinable mappings*, Bull. Tokyo Gakugei Univ. (4) **42** (1990), 1–8.

+[18] H. Hosokawa, *Induced mappings between hyperspaces II*, Bull. Tokyo Gakugei Univ. (4) **44** (1992), 1–7.

+[19] H. Hosokawa, *Induced mappings on hyperspaces*, Tsukuba J. Math. **21** (1997), 239–250.

+[20] H. Hosokawa, *Induced mappings on hyperspaces II*, Tsukuba J. Math. **21** (1997), 773–783.

+[21] A. Illanes, *The openness of induced mappings on hyperspaces*, Colloq. Math. **74** (1997), 219–224.

+[22] A. Illanes and S. B. Nadler Jr., *Hyperspaces, fundamentals and recent advances*, Monographs and Textbooks in Pure and Applied Math., Vol. 216, Marcel Dekker Inc., New York and Basel, 1999.

+[23] A. Y. W.
Lau, *A note on monotone maps and hyperspaces*, Bull. Acad. Polon. Sci. Ser. Sci. Math. Astronom. Phys. **24** (1976), 121–123.

+[24] A. Lelek and D. R. Read, *Compositions of confluent mappings and some other classes of functions*, Colloq. Math. **29** (1974), 101–112.

+[25] J. M. Martinez-Montejano, *Non-confluence of the natural map of products onto symmetric products*, Continuum theory (Denton, TX, 1999), 229–236, Lecture Notes in Pure and Appl. Math., 230, Dekker, New York, 2002.

+[26] S. B. Nadler Jr., *Induced universal maps and some hyperspaces with the fixed point property*, Proc. Amer. Math. Soc. **100** (1987), 749–754. \ No newline at end of file diff --git a/samples/texts/7597009/page_36.md b/samples/texts/7597009/page_36.md new file mode 100644 index 0000000000000000000000000000000000000000..581049d328924bc73a72dd0a592649fd539605cb --- /dev/null +++ b/samples/texts/7597009/page_36.md @@ -0,0 +1,15 @@ +[27] S. B. Nadler Jr., *Continuum theory: An introduction*, Monographs and Textbooks in Pure and Applied Math., Vol. 158, Marcel Dekker Inc., New York, N.Y., 1992.

+[28] J. Oledski, *On symmetric products*, Fund. Math. **131** (1988), 185–190.

+[29] J. Sánchez-Martínez, *Cociente de productos simétricos de un continuo*, Tesis de licenciatura, Universidad Autónoma del Estado de México, May 2009.

+[30] G. T. Whyburn, *Analytic Topology*, New York, 1942.

+[31] R. F. Williams, *A note on unstable homeomorphisms*, Proc. Amer. Math. Soc. **6** (1955), 308–309.

+INSTITUTO DE MATEMÁTICAS, UNIVERSIDAD NACIONAL AUTÓNOMA DE MÉXICO, CIRCUITO EXTERIOR, CD. UNIVERSITARIA, MÉXICO 04510, D.F.
MÉXICO

+*E-mail address: illanes@matem.unam.mx*

+*E-mail address: galoolag@gmail.com* \ No newline at end of file diff --git a/samples/texts/7597009/page_4.md b/samples/texts/7597009/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..69b40b0b659660e41040ce3f42db4e19f5d047a8 --- /dev/null +++ b/samples/texts/7597009/page_4.md @@ -0,0 +1,35 @@ +$$
+\langle U_1, \dots, U_m \rangle = \\
+\{ A \in 2^X : A \subset \bigcup_{i=1}^{m} U_i \text{ and } A \cap U_i \neq \emptyset \text{ for each } i \in \{1, \dots, m\} \}.
+$$

+The family $\{\langle U_1, \ldots, U_m \rangle : m \in \mathbb{N} \text{ and } U_i \text{ is open in } X \text{ for each } i \in \{1, \ldots, m\}\}$ is a basis for the Vietoris topology. We define $\langle U_1, \ldots, U_m \rangle_n = \langle U_1, \ldots, U_m \rangle \cap \mathcal{F}_n(X)$.

+Proceeding as in [8, Lemma 2.1], the following lemma can be proved.

+**Lemma 2.1.** Let $\mathcal{A}$ be a connected, closed subset of $\mathcal{F}_n(X)$ such that $\mathcal{A} \cap \mathcal{F}_m(X) \neq \emptyset$, for some $m \le n$. Let $A = \bigcup\{B : B \in \mathcal{A}\}$. Then $A \in C_m(X)$ and every component of $A$ intersects every element of $\mathcal{A} \cap \mathcal{F}_m(X)$.

+We will use the following particular case of [25, Lemma 1].

+**Lemma 2.2.** Let $C_1, \dots, C_m$ be pairwise disjoint subcontinua of $X$. Suppose that $m \le n$. Then $\langle C_1, \dots, C_m \rangle_n$ is a subcontinuum of $\mathcal{F}_n(X)$.

+**Definition 2.3.** Given $A, B \in C(X)$, with $A \subsetneq B$, we say that a continuous function $\alpha: [0, 1] \to C(X)$ is an *order arc* from $A$ to $B$ in $C(X)$ if $\alpha(0) = A$, $\alpha(1) = B$ and $\alpha(s) \subsetneq \alpha(t)$ for every $0 \le s < t \le 1$.

+The existence of order arcs is a well-known fact in the theory of hyperspaces; it is stated in the following theorem (see [22, 14.6, p. 112]).

+**Theorem 2.4.** Given $A, B \in C(X)$ such that $A \subsetneq B$, there exists an order arc from $A$ to $B$ in $C(X)$.

+3.
TRADITIONAL PROPERTIES

+3.1. **Homeomorphisms.** The following theorem is immediate.

+**Theorem 3.1.** The following statements for a mapping $f: X \rightarrow Y$ are equivalent:

+a) $f$ is a homeomorphism,

+b) $f_n$ is a homeomorphism, for some $n \in \mathbb{N}$,

+c) $f_n$ is a homeomorphism, for every $n \in \mathbb{N}$. \ No newline at end of file diff --git a/samples/texts/7597009/page_5.md b/samples/texts/7597009/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..2e5f2b7dba4bd3a918178d08032a55ed3e3a7696 --- /dev/null +++ b/samples/texts/7597009/page_5.md @@ -0,0 +1,23 @@ +**3.2. Monotone and Open Mappings.**

+**Definition 3.2.** We say that a map $f: X \to Y$ is monotone if $f^{-1}(y)$ is a connected subset of $X$ for every $y \in Y$.

+**Theorem 3.3.** The following statements for a mapping $f: X \to Y$ are equivalent:

+a) $f$ is monotone,

+b) $f_n$ is monotone for some $n \in \mathbb{N}$,

+c) $f_n$ is monotone for every $n \in \mathbb{N}$.

+*Proof.* a) ⇒ c). Suppose that $f: X \to Y$ is monotone and let $n \in \mathbb{N}$. Let $B = \{y_1, \dots, y_m\} \in \mathcal{F}_n(Y)$, where $m \le n$. For every $i \in \{1, \dots, m\}$, let $C_i = f^{-1}(y_i)$. Since $f$ is monotone, $C_i$ is a subcontinuum of $X$. In addition, the sets $C_1, \dots, C_m$ are pairwise disjoint. It follows from Lemma 2.2 that the set $\mathcal{A} = \langle C_1, \dots, C_m \rangle_n$ is a subcontinuum of $\mathcal{F}_n(X)$. It is easy to show that $\mathcal{A} = f_n^{-1}(B)$. This concludes the proof that $f_n^{-1}(B)$ is connected and shows that $f_n$ is monotone.

+b) ⇒ a). Suppose that $f_n$ is monotone for some $n \in \mathbb{N}$. Let $y \in Y$. Then $f_n^{-1}(\{y\})$ is a subcontinuum of $\mathcal{F}_n(X)$. Let $A = \bigcup\{B : B \in f_n^{-1}(\{y\})\}$. Since $f$ is surjective, there exists $x \in X$ such that $f(x) = y$. Then $\{x\} \in f_n^{-1}(\{y\}) \cap \mathcal{F}_1(X)$. By Lemma 2.1, $A$ is connected. Clearly, $A = f^{-1}(y)$.
Thus $f^{-1}(y)$ is connected. Hence $f$ is monotone. □ + +**Definition 3.4.** A mapping $f: X \to Y$ is open if $f(U)$ is open in $Y$ for every open subset $U$ of $X$. + +**Theorem 3.5.** A mapping $f: X \to Y$ is open if and only if the mapping $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is open. + +*Proof.* (Necessity). Suppose that $f$ is open. Consider an open subset $\mathcal{U}$ of $\mathcal{F}_2(X)$. Pick an element $f_2(A) \in f_2(\mathcal{U})$, where $A \in \mathcal{U}$. We put $A = \{p, q\}$, where possibly $p = q$. Since $\mathcal{U}$ is open, there exists $\varepsilon > 0$ such that $B_H(\varepsilon, A) \subset \mathcal{U}$. Since $f$ is open, there exists $\delta > 0$ such that $B(\delta, f(p)) \subset f(B(\varepsilon, p))$ and $B(\delta, f(q)) \subset f(B(\varepsilon, q))$. If $f(p) \neq f(q)$, we can also ask that $B(\delta, f(p)) \cap B(\delta, f(q)) = \emptyset$. + +We claim that $B_H(\delta, f_2(A)) \subset f_2(\mathcal{U})$. Let $B = \{w, z\} \in B_H(\delta, f_2(A))$. Then $H(\{w, z\}, \{f(p), f(q)\}) < \delta$. In the case that $f(p) \neq f(q)$, we may assume that $w \in B(\delta, f(p))$ and $z \in B(\delta, f(q))$. \ No newline at end of file diff --git a/samples/texts/7597009/page_6.md b/samples/texts/7597009/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..2723f4f289c351d94035483087908bc1c348979f --- /dev/null +++ b/samples/texts/7597009/page_6.md @@ -0,0 +1,17 @@ +In the case that $f(p) = f(q)$, then $w, z \in B(\delta, f(p)) = B(\delta, f(q))$. +In both cases, we may assume that $w \in B(\delta, f(p)) \subset f(B(\varepsilon, p))$ +and $z \in B(\delta, f(q)) \subset f(B(\varepsilon, q))$. Then there exist $u \in B(\varepsilon, p)$ +and $x \in B(\varepsilon, q)$ such that $f(u) = w$ and $f(x) = z$. Notice that +$H(\{u, x\}, \{p, q\}) < \varepsilon$. Hence $\{u, x\} \in \mathcal{U}$ and $f_2(\{u, x\}) = \{w, z\}$. +Thus $B \in f_2(\mathcal{U})$. This shows that $B_H(\delta, f_2(A)) \subset f_2(\mathcal{U})$. 
Hence +$f_2(\mathcal{U})$ is open. + +(Sufficiency). Suppose that $f_2$ is open and let $U$ be an open subset of $X$. Given $p \in U$, we have $\{p\} \in \langle U \rangle_2$. Since $f_2$ is open, $f_2(\langle U \rangle_2)$ is an open subset of $\mathcal{F}_2(Y)$ that has the element $f_2(\{p\}) = \{f(p)\}$, so there exists $\varepsilon > 0$ such that $B_H(\varepsilon, \{f(p)\}) \subset f_2(\langle U \rangle_2)$. + +We claim that $B(\varepsilon, f(p)) \subset f(U)$. Let $y \in B(\varepsilon, f(p))$. Then $\{y\} \subset f_2(\langle U \rangle_2)$. So there exists $B \in \langle U \rangle_2$ such that $\{y\} = f_2(B)$. Pick a point $b \in B$. Then $b \in U$ and $f(b) = y$. This shows that $y \in f(U)$. We have shown that $B(\varepsilon, f(p)) \subset f(U)$. Hence $f(U)$ is open in $Y$. Therefore $f$ is open. $\square$ + +**Theorem 3.6.** If $f: X \to Y$ is a mapping such that $Y$ is nondegenerate and $f_n: \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is open for some $n \ge 3$, then $f$ is a homeomorphism. + +*Proof.* It suffices to show that *f* is one-to-one. Suppose, to the contrary, that there exist two points *x*₁ ≠ *x*₂ in *X* such that *f*(*x*₁) = *f*(*x*₂). Since *Y* is nondegenerate and *f* is surjective, there exists *x*₃ ∈ *X* such that *f*(*x*₃) ≠ *f*(*x*₁). Let ε > 0 be such that *B*(ε, *f*(*x*₁)) ∩ *B*(ε, *f*(*x*₃)) = ∅. By the continuity of *f*, there exists δ > 0 such that the sets *B*(δ, *x*₁), *B*(δ, *x*₂) and *B*(δ, *x*₃) are pairwise disjoint, *f*(*B*(δ, *x*₃)) ⊂ *B*(ε, *f*(*x*₃)) and *f*(*B*(δ, *x*₁)) ∪ *f*(*B*(δ, *x*₂)) ⊂ *B*(ε, *f*(*x*₁)). + +Since $f_n$ is open, the set $f_n(B_H(\delta, \{x_1, x_2, x_3\}))$ is an open subset of $\mathcal{F}_n(Y)$ that has the element $\{f(x_1), f(x_3)\}$. Hence there exists $\eta > 0$ such that $B_H(\eta, \{f(x_1), f(x_3)\}) \subset f_n(B_H(\delta, \{x_1, x_2, x_3\}))$. We may assume that $\eta < \epsilon$. Pick $n-1$ different points $y_1, \dots, y_{n-1} \in B(\eta, f(x_3)) \setminus \{f(x_3)\}$. 
Let $B = \{f(x_1), y_1, \dots, y_{n-1}\}$. Notice that $B \in B_H(\eta, \{f(x_1), f(x_3)\})$ and since this set is contained in $f_n(B_H(\delta, \{x_1, x_2, x_3\}))$, there exists $A \in B_H(\delta, \{x_1, x_2, x_3\})$ such that $f(A) = B$. Then there exist $a_1, a_2 \in A$ such that $a_1 \in B(\delta, x_1)$ and $a_2 \in B(\delta, x_2)$. Then $a_1 \neq a_2$. In addition, there exist $u_1, \dots, u_{n-1} \in A$ such that $f(u_1) = y_1, \dots, f(u_{n-1}) = y_{n-1}$. Since
\ No newline at end of file
diff --git a/samples/texts/7597009/page_7.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e8e1f3e22c4fb31c540c9fde4bb76075afdead1
--- /dev/null
+++ b/samples/texts/7597009/page_7.md
@@ -0,0 +1,15 @@
+the points $y_1, \dots, y_{n-1}$ are pairwise different, the points $u_1, \dots, u_{n-1}$ are pairwise different. Given $i \in \{1, \dots, n-1\}$, $f(u_i) = y_i \in B(\eta, f(x_3)) \subset B(\varepsilon, f(x_3))$. Hence $f(u_i) \notin B(\varepsilon, f(x_1))$ and $f(u_i) \notin f(B(\delta, x_1)) \cup f(B(\delta, x_2))$. This implies that $u_i \notin B(\delta, x_1) \cup B(\delta, x_2)$. Hence $u_i \neq a_1$ and $u_i \neq a_2$. Then all of the points $a_1, a_2, u_1, \dots, u_{n-1}$ are different and all of them are elements of $A$. This is absurd since $A \in \mathcal{F}_n(X)$. This contradiction shows that $f$ is one-to-one. Therefore $f$ is a homeomorphism. $\square$
+
+**Definition 3.7.** A mapping $f: X \to Y$ is OM (respectively, MO) if there exist a continuum $Z$ and mappings $g: X \to Z$ and $h: Z \to Y$ such that $f = h \circ g$, $g$ is monotone and $h$ is open (respectively, $g$ is open and $h$ is monotone).
+
+**Definition 3.8.** Given a sequence $\{A_m\}_{m=1}^{\infty}$ of subsets of $X$, define $\limsup_{m \to \infty} A_m$ as the set of points $x \in X$ for which there exist positive integers $m_1 < m_2 < \dots$ and points $x_{m_k} \in A_{m_k}$ such that $\lim x_{m_k} = x$. 
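As a quick illustration of this definition (an added example, not part of the original text): in $X = [-1, 1]$, take $A_m = \{(-1)^m, \tfrac{1}{m}\}$. Then

```latex
\limsup_{m \to \infty} A_m = \{-1, 0, 1\},
```

since $1$ and $-1$ arise as limits of selections $x_{m_k} \in A_{m_k}$ along the even and the odd indices respectively, and $0 = \lim_{m \to \infty} \tfrac{1}{m}$; every convergent selection converges to one of these three points.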
+
+The following characterization of OM mappings was shown in [24, 2.2, p. 102, and Corollary 3.1, p. 104] by A. Lelek and D. R. Read.
+
+**Lemma 3.9.** A mapping $f: X \to Y$ is OM if and only if, for every sequence $\{y_m\}_{m=1}^{\infty}$ in $Y$ that converges to a point $y \in Y$, we have that $\limsup_{m \to \infty} f^{-1}(y_m)$ meets every component of $f^{-1}(y)$.
+
+**Theorem 3.10.** If $f_n : \mathcal{F}_n(X) \to \mathcal{F}_n(Y)$ is OM for some $n \in \mathbb{N}$, then $f$ is OM.
+
+*Proof.* Let $\{y_m\}_{m=1}^{\infty}$ be a sequence in $Y$ that converges to a point $y \in Y$. Let $C$ be a component of $f^{-1}(y)$ and let $\mathcal{C}$ be the component of $f_n^{-1}(\{y\})$ that contains $\mathcal{F}_1(C)$. Since $\mathcal{C}$ is connected and $\mathcal{F}_1(C) \subset \mathcal{C}$, it follows from Lemma 2.1 that $M = \bigcup\{E : E \in \mathcal{C}\}$ is connected. We will show that $M = C$.
+
+Given $x \in C$, $\{x\} \in \mathcal{F}_1(C) \subset \mathcal{C}$, so $x \in M$. Hence $C \subset M$. If $x \in M$, there exists $A \in \mathcal{C}$ such that $x \in A$. Since $A \in \mathcal{C} \subset f_n^{-1}(\{y\})$, $f_n(A) = \{y\}$, so $f(x) = y$. Thus $M \subset f^{-1}(y)$ and $M$ is connected. Since $C \subset M$ and $C$ is a component of $f^{-1}(y)$, we have $C = M$.
\ No newline at end of file
diff --git a/samples/texts/7597009/page_8.md
new file mode 100644
index 0000000000000000000000000000000000000000..97bbd9819bd8aae924dca50cdc9bc87438ba42a3
--- /dev/null
+++ b/samples/texts/7597009/page_8.md
@@ -0,0 +1,35 @@
+Since $f_n$ is OM and the sequence $\{\{y_m\}\}_{m=1}^{\infty}$ converges to $\{y\}$, there exists an element $A \in (\limsup_{m \to \infty} f_n^{-1}(\{y_m\})) \cap \mathcal{C}$. Thus there exist positive integers $m_1 < m_2 < \dots$ and elements $A_{m_k} \in f_n^{-1}(\{y_{m_k}\})$ such that $\lim A_{m_k} = A$. Fix $x \in A$. Then there exist points $x_{m_k} \in A_{m_k}$ such that $\lim x_{m_k} = x$ ([22, 4.5, p. 25]). For each $k \in \mathbb{N}$, $f_n(A_{m_k}) = \{y_{m_k}\}$, so $f(x_{m_k}) = y_{m_k}$ and hence $x_{m_k} \in f^{-1}(y_{m_k})$. Since $\lim x_{m_k} = x$, $x \in \limsup_{m \to \infty} f^{-1}(y_m)$. We also know that $x \in A \in \mathcal{C}$ and that $M = C$, therefore $x \in C$. Then $x \in (\limsup_{m \to \infty} f^{-1}(y_m)) \cap C$.
+
+We have shown that, for each component $C$ of $f^{-1}(y)$ and each sequence $\{y_m\}_{m=1}^{\infty}$ in $Y$ that converges to $y \in Y$, we have that $C \cap (\limsup_{m \to \infty} f^{-1}(y_m)) \neq \emptyset$. By Lemma 3.9 we conclude that $f$ is OM. $\square$
+
+The next natural question is whether the converse of Theorem 3.10 holds. For the case $n = 2$ we have already done all the necessary work to assure it does.
+
+**Theorem 3.11.** If $f$ is OM (respectively, MO), then $f_2$ is OM (respectively, MO).
+
+*Proof.* If $f$ is OM, then there exist a continuum $Z$ and mappings $g: X \to Z$ and $h: Z \to Y$ such that $f = h \circ g$, $g$ is monotone and $h$ is open. Consider then the continuum $\mathcal{F}_2(Z)$ and the mappings $g_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Z)$ and $h_2: \mathcal{F}_2(Z) \to \mathcal{F}_2(Y)$. By Theorems 3.3 and 3.5 we know that $g_2$ is monotone and $h_2$ is open. It is clear that $f_2 = h_2 \circ g_2$, thus $f_2$ is OM. The proof for MO is analogous. $\square$
+
+**Corollary 3.12.** A mapping $f: X \to Y$ is OM if and only if $f_2: \mathcal{F}_2(X) \to \mathcal{F}_2(Y)$ is OM.
+
+As in the case of open mappings, Corollary 3.12 cannot be extended to the case $n \ge 3$ (compare with Proposition 9 of [5]).
+
+**Example 3.13.** There exist a continuum $X$ and a mapping $f : X \to X$ that is OM and MO and, for each $n \ge 3$, the induced mapping $f_n : \mathcal{F}_n(X) \to \mathcal{F}_n(X)$ is neither OM nor MO.
+
+Let $X = [0, 1]$ and $f: [0, 1] \rightarrow [0, 1]$ be defined by:
+
+$$
+f(x) = \begin{cases} 2x, & \text{if } x \in [0, \frac{1}{2}], \\ 2 - 2x, & \text{if } x \in [\frac{1}{2}, 1]. 
\end{cases}
+$$
\ No newline at end of file
diff --git a/samples/texts/7597009/page_9.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b60772ce5b2a95658ffd2405dcfcf60b5e226c4
--- /dev/null
+++ b/samples/texts/7597009/page_9.md
@@ -0,0 +1,15 @@
+The map $f$ is known as the tent map. Since $f$ is an open mapping and $f = f \circ \text{id} = \text{id} \circ f$, where $\text{id}: [0,1] \to [0,1]$ is the identity mapping, we conclude that $f$ is OM and MO.
+
+Suppose that $n \ge 3$ and that $f_n$ is OM. Then there exist a continuum $Z$ and mappings $g: \mathcal{F}_n([0,1]) \to Z$ and $h: Z \to \mathcal{F}_n([0,1])$ such that $g$ is monotone, $h$ is open and $f_n = h \circ g$. Notice that $f^{-1}(y)$ has at most two points for each $y \in [0,1]$. It follows that, for every finite subset $M$ of $[0,1]$, the set $f^{-1}(M)$ is finite, and the set $\mathcal{P}(f^{-1}(M)) = \{W : W \subseteq f^{-1}(M)\}$ is also finite. Given $A \in \mathcal{F}_n([0,1])$, it is clear that $f_n^{-1}(A) \subset \mathcal{P}(f^{-1}(A))$. Hence $f_n^{-1}(A)$ is finite.
+
+Given $z \in Z$ we have $f_n(g^{-1}(z)) = (h \circ g)(g^{-1}(z)) = h(z)$, so $g^{-1}(z) \subset f_n^{-1}(h(z))$. Since $g$ is monotone and $f_n^{-1}(h(z))$ is finite, $g^{-1}(z)$ is a one-point set. Thus $g$ is one-to-one and hence a homeomorphism. In particular, $g$ is open. Since $h$ is open, by hypothesis, we have that $f_n = h \circ g$ is open. Since $n \ge 3$, it follows from Theorem 3.6 that $f$ is a homeomorphism, a contradiction. This shows that $f_n$ is not an OM mapping.
+
+The following theorem shows that $f_n$ cannot be an MO mapping. $\square$
+
+**Theorem 3.14.** Let $n \ge 3$ and let $f: X \to Y$ be a mapping such that $Y$ is nondegenerate. If $f_n$ is MO, then $f$ is monotone.
+
+*Proof.* Suppose to the contrary that $f$ is not monotone. Then there exist $y_1 \in Y$ and nonempty disjoint compact subsets $K$ and $L$ of $X$ such that $f^{-1}(y_1) = K \cup L$. 
Fix a point $y_2 \in Y - \{y_1\}$ and a sequence $\{v_m\}_{m=1}^\infty$ of pairwise different elements in $Y - \{y_1, y_2\}$ such that $\lim_{m} v_m = y_2$. For each $m \in \mathbb{N}$, choose a point $u_m \in X$ such that $f(u_m) = v_m$. We may assume that $\lim_{m} u_m = x_2$ for some point $x_2 \in X$. Thus $f(x_2) = y_2$.
+
+Let $Z$ be a continuum and let $g: \mathcal{F}_n(X) \to Z$, $h: Z \to \mathcal{F}_n(Y)$ be mappings such that $g$ is open, $h$ is monotone and $f_n = h \circ g$. Let $D = h^{-1}(\{y_1, y_2\})$. Then $D$ is a subcontinuum of $Z$. It is easy to see that $g^{-1}(D) = \langle f^{-1}(y_1), f^{-1}(y_2) \rangle_n$. Let $E = f^{-1}(y_2)$.
+
+Note that $\langle f^{-1}(y_1), f^{-1}(y_2) \rangle_n = \langle K \cup L, E \rangle_n = \langle K, E \rangle_n \cup \langle L, E \rangle_n \cup \langle K, L, E \rangle_n$ and $\langle K, L, E \rangle_n \neq \emptyset$ ($n \ge 3$). Thus $\langle K, L, E \rangle_n$ is a nonempty open and closed subset of $g^{-1}(D)$ (relative to the topology of $g^{-1}(D)$). Choose points $x_1 \in K$ and $x_3 \in L$. Let $\mathcal{C}$ be the component of $g^{-1}(D)$ containing the element $\{x_1, x_2, x_3\}$.
\ No newline at end of file
diff --git a/samples/texts/7662009/page_1.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e000de04a902b27ccf6acc47250b1c66e4e9bcc
--- /dev/null
+++ b/samples/texts/7662009/page_1.md
@@ -0,0 +1,28 @@
+Constructive Dimension and
+Weak Truth-Table Degrees
+
+Laurent Bienvenu¹, David Doty² and Frank Stephan³*
+
+¹ Laboratoire d'Informatique Fondamentale de Marseille, Université de Provence,
+39 rue Joliot-Curie, 13453 Marseille Cedex 13, France.
+laurent.bienvenu@lif.univ-mrs.fr
+
+² Department of Computer Science, Iowa State University, Ames, IA 50011, USA.
+ddoty@iastate.edu
+
+³ School of Computing and Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543, Republic of Singapore. 
+fstephan@comp.nus.edu.sg
+
+**Abstract.** This paper examines the constructive Hausdorff and packing dimensions of weak truth-table degrees. The main result is that every infinite sequence $S$ with constructive Hausdorff dimension $\dim_H(S)$ and constructive packing dimension $\dim_P(S)$ is weak truth-table equivalent to a sequence $R$ with $\dim_H(R) \ge \dim_H(S)/\dim_P(S) - \epsilon$, for arbitrary $\epsilon > 0$. Furthermore, if $\dim_P(S) > 0$, then $\dim_P(R) \ge 1-\epsilon$. The reduction thus serves as a *randomness extractor* that increases the algorithmic randomness of $S$, as measured by constructive dimension.
+
+A number of applications of this result shed new light on the constructive dimensions of wtt degrees (and, by extension, Turing degrees). A lower bound of $\dim_H(S)/\dim_P(S)$ is shown to hold for the wtt degree of any sequence $S$. A new proof is given of a previously-known zero-one law for the constructive packing dimension of wtt degrees. It is also shown that, for any *regular* sequence $S$ (that is, $\dim_H(S) = \dim_P(S)$) such that $\dim_H(S) > 0$, the wtt degree of $S$ has constructive Hausdorff and packing dimension equal to 1.
+
+Finally, it is shown that no single Turing reduction can be a *universal* constructive Hausdorff dimension extractor.
+
+**Keywords:** constructive dimension, weak truth-table, extractor, degree, randomness
+
+# 1 Introduction
+
+Hausdorff [5] initiated the study of dimension as a general framework to define the size of subsets of metric spaces. Recently this framework has been effectivized; Lutz [9] gives an overview of this historical development. Furthermore, Lutz [8, Section 6] reviews early results that anticipated the effectivization of
+
+* Corresponding author. 
\ No newline at end of file diff --git a/samples/texts/7662009/page_10.md b/samples/texts/7662009/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..680e794b22b21d5c861e25902857b1ce00bcc8e7 --- /dev/null +++ b/samples/texts/7662009/page_10.md @@ -0,0 +1,41 @@ +2. David Doty. Every sequence is decompressible from a random one. In *Logical Approaches to Computational Barriers, Proceedings of the Second Conference on Computability in Europe*, Springer Lecture Notes in Computer Science, volume 3988 of *Computability in Europe*, Swansea, UK, July 2006, pp. 153–162. + +3. David Doty. Dimension extractors and optimal decompression. *Theory of Computing Systems*, to appear. Special issue of selected papers from Computability in Europe 2006. + +4. Lance Fortnow, John M. Hitchcock, Pavan Aduri, N. Variyam Vinodchandran and Fengming Wang. Extracting Kolmogorov complexity with applications to dimension zero-one laws. In *Proceedings of the 33rd International Colloquium on Automata, Languages and Programming*, Springer LNCS, 4051:335–345, 2006. + +5. Felix Hausdorff. Dimension und äusseres Mass. *Mathematische Annalen*, 79:157–179, 1919. + +6. Ming Li and Paul M. B. Vitányi. *An Introduction to Kolmogorov Complexity and its Applications*. Springer-Verlag, Berlin, 1997. Second Edition. + +7. Jack H. Lutz. Dimension in complexity classes. *SIAM Journal on Computing*, 32:1236–1259, 2003. + +8. Jack H. Lutz. The dimensions of individual strings and sequences. *Information and Computation*, 187:49–79, 2003. + +9. Jack H. Lutz. Effective fractal dimensions (invited lecture at the International Conference on Computability and Complexity in Analysis, Cincinnati, OH, August 28-30, 2003), *Mathematical Logic Quarterly* 51, pp. 62–72, 2005. + +10. Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. *Information Processing Letters*, 84(1):1–3, 2002. + +11. André Nies and Jan Reimann. 
A lower cone in the wtt degrees of non-integral effective dimension. To appear in *Proceedings of IMS workshop on Computational Prospects of Infinity*, Singapore. Earlier version appeared as Technical Report 63, Workgroup Mathematical Logic and Theoretical Computer Science, University of Heidelberg, 2005. + +12. Andre Nies, Frank Stephan and Sebastian A. Terwijn. Randomness, relativization and Turing degrees. The Journal of Symbolic Logic 70:515–535, 2005. + +13. Piergiorgio Odifreddi. *Classical recursion theory*, volume 125 of *Studies in Logic and the Foundations of Mathematics*. North-Holland, 1989. + +14. Jan Reimann. *Computability and fractal dimension*. Doctoral thesis, Heidelberg, 2005. + +15. Jan Reimann and Theodore Slaman. *Randomness, Entropy and Reducibility*. Manuscript, 2005. + +16. Boris Ya. Ryabko. Coding of combinatorial sources and Hausdorff dimension. *Soviet Mathematics Doklady*, 30:219–222, 1984. + +17. Boris Ya. Ryabko. Noiseless coding of combinatorial sources. *Problems of Information Transmission*, 22:170–179, 1986. + +18. Ronen Shaltiel. Recent developments in explicit constructions of extractors. *Bulletin of the EATCS*, 77:67–95, 2002. + +19. Robert I. Soare. *Recursively Enumerable Sets and Degrees*. Springer-Verlag, Berlin, 1987. + +20. Frank Stephan. Hausdorff-dimension and weak truth-table reducibility. Technical Report TR52/05, School of Computing, National University of Singapore, 2005. + +21. Dennis Sullivan. Entropy, Hausdorff measures old and new, and limit sets of geometrically finite Kleinian groups. *Acta Mathematica*, 153:259–277, 1984. + +22. Claude Tricot. Two definitions of fractional dimension. *Mathematical Proceedings of the Cambridge Philosophical Society*, 91:57–74, 1982. 
\ No newline at end of file diff --git a/samples/texts/7662009/page_2.md b/samples/texts/7662009/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..7924da4cf44a86f9d100651086179b22a0431943 --- /dev/null +++ b/samples/texts/7662009/page_2.md @@ -0,0 +1,35 @@ +Hausdorff dimension. *Constructive Hausdorff dimension* was defined by Lutz [8] to study effective dimension at the level of computability theory. Intuitively, given an infinite binary sequence *S* – interpreted as a language or decision problem – the constructive Hausdorff dimension dimH(*S*) of *S* is a real number in the interval [0,1] indicating the density of algorithmic randomness of the sequence. The constructive Hausdorff dimension of a class *C* of sequences is the supremum of the dimensions of individual sequences in *C*. For many classes *C* of interest in computability theory, the problem of determining the constructive Hausdorff dimension of *C* remains open. + +Reimann [14] investigated in particular whether there are degrees of frac- +tional constructive Hausdorff dimension. Stated in terms of individual sequences, +Reimann asked which reducibilities (such as Turing, many-one, weak truth-table, +etc.) are capable of increasing the constructive Hausdorff dimension of a se- +quence. We call such a reduction a *dimension extractor*, since its purpose bears +a resemblance to that of the *randomness extractors* of computational complexity +[18], which are algorithms that turn a source of weak randomness (a probabilis- +tic source with low entropy) into a source of strong randomness (a source with +high entropy). Viewing a sequence with positive, but still fractional, constructive +Hausdorff dimension as a weak source of randomness, Reimann essentially asked +whether such randomness can be extracted via a reduction to create a sequence +with dimension closer to 1. 
If such extraction is *not* possible for some sequence *S*, +this indicates that the degree of *S* under the reduction has fractional dimension. + +A number of negative results for dimension extractors are known. Reimann and Terwijn [14, Theorem 3.10] proved that there are many-one and bounded truth-table degrees with constructive Hausdorff dimension strictly between 0 and 1. Later Reimann and Slaman [15] extended this result to truth-table degrees. Stephan [20] showed that there is a relativized world in which there exists a wtt degree of constructive Hausdorff dimension between $\frac{1}{4}$ and $\frac{1}{2}$. Furthermore, Nies and Reimann [11] obtained a non-relativized variant of this result and constructed, for each rational $\alpha$ between 0 and 1, a wtt degree of constructive Hausdorff dimension $\alpha$. + +Doty [3] attempted positive results by considering the interaction between +constructive Hausdorff dimension and *constructive packing dimension* [1], a dual +quantity that is a constructive effectivization of classical packing dimension [21, +22], another widely-studied fractal dimension. The constructive packing dimen- +sion dimP(S) of a sequence S always obeys + +$$ +0 \leq \dim_H(S) \leq \dim_P(S) \leq 1, +$$ + +with each inequality tight in the strong sense that there are sequences *S* in which +dimH(*S*) and dimP(*S*) may take on any values obeying the stated constraint. +Doty showed that every sequence *S* with dimH(*S*) > 0 is Turing equivalent to +a sequence *R* with dimP(*R*) ≥ 1 − ε, for arbitrary ε > 0. This implies that the +constructive packing dimension of the Turing degree of any sequence *S* with +dimH(*S*) > 0 is equal to 1. 
Unfortunately, since dimH(*R*) ≤ dimP(*R*), this Tur- +ing reduction constitutes a weaker example of a dimension extractor than that \ No newline at end of file diff --git a/samples/texts/7662009/page_3.md b/samples/texts/7662009/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..3e8a22aa3987d3f69af2106e45cd5c6606935242 --- /dev/null +++ b/samples/texts/7662009/page_3.md @@ -0,0 +1,42 @@ +sought by Reimann and it tells us nothing of the constructive dimensions of +arbitrary Turing degrees. + +We obtain in the current paper stronger positive results for constructive +dimension extractors. Our main result, in section 2, is that, given any infinite +sequence $S$ and $\epsilon > 0$, there exists $R \equiv_{\text{wtt}} S$ such that $\dim_H(R) \ge \frac{\dim_H(S)}{\dim_P(S)} - \epsilon$ +and, if $\dim_P(S) > 0$, then $\dim_P(R) \ge 1 - \epsilon$. This has immediate consequences +for the dimensions of weak truth-table degrees: + +- Given any sequence $S$, $\dim_H(\deg_{\text{wtt}}(S)) \ge \frac{\dim_H(S)}{\dim_P(S)}$. + +- If $\dim_P(S) > 0$, then $\dim_P(\deg_{\text{wtt}}(S)) = 1$, implying that every wtt degree has constructive packing dimension 0 or 1. + +- Given any *regular* sequence *S* such that $\dim_H(S) > 0$, $\dim_H(\deg_{\text{wtt}}(S)) = 1$, where a sequence *S* is regular if it satisfies $\dim_H(S) = \dim_P(S)$. + +In section 3, we use Theorem 2.1 to show that, for every $\alpha > 0$, there is no +universal Turing reduction that is guaranteed to extract dimension from all +sequences of dimension at least $\alpha$. + +Before going into the details of the results, we introduce the concepts and +notations formally. + +**Notation.** We refer the reader to the textbooks of Li and Vitányi [6] for an +introduction to Kolmogorov complexity and algorithmic information theory and +of Odifreddi [13] and Soare [19] for an introduction to computability theory. 
+
Although we follow mainly the notation in these books, we nevertheless want to remind the reader of the following definitions, either for the readers' convenience or because we had to choose between several common ways of denoting the corresponding mathematical objects.
+
+All logarithms are base 2. $\mathbb{N}$ denotes the set $\{0, 1, 2, 3, ...\}$ of the natural numbers including 0. $\{0, 1\}^*$ denotes the set of all finite, binary strings. For all $x \in \{0, 1\}^*$, $|x|$ denotes the length of $x$. $\lambda$ denotes the empty string. $\mathbf{C} = \{0, 1\}^\infty$ denotes the Cantor space, the set of all infinite, binary sequences. For $x \in \{0, 1\}^*$ and $y \in \{0, 1\}^* \cup \mathbf{C}$, $xy$ denotes the concatenation of $x$ and $y$, $x \sqsubseteq y$ denotes that $x$ is a prefix of $y$ (that is, there exists $u \in \{0, 1\}^* \cup \mathbf{C}$ such that $xu = y$) and $x \sqsubset y$ denotes that $x \sqsubseteq y$ and $x \neq y$. For $S \in \{0, 1\}^* \cup \mathbf{C}$ and $i, j \in \mathbb{N}$, $S[i]$ denotes the $i$th bit of $S$, with $S[0]$ being the leftmost bit, and $S[i..j]$ denotes the substring consisting of the $i$th through $j$th bits of $S$ (inclusive), with $S[i..j] = \lambda$ if $i > j$.
+
+*Reductions and Compression.* Let $M$ be a Turing machine and $S \in \mathbf{C}$. We say $M$ computes $S$ if $M$, on input $n \in \mathbb{N}$ (written $M(n)$), outputs the string $S[0..n-1]$. We define an *oracle Turing machine* to be a Turing machine $M$ that can make constant-time queries to an oracle sequence and we let OTM denote the set of all oracle Turing machines. For $R \in \mathbf{C}$, we say $M$ operates *with oracle R* if, whenever $M$ makes a query to index $n \in \mathbb{N}$, the bit $R[n]$ is returned. We write $M^R$ to denote the oracle Turing machine $M$ with oracle $R$.
+
+Let $S, R \in \mathbf{C}$ and $M \in \text{OTM}$. 
We say *S is Turing reducible to R via M* and +we write $S \le_T R$ via $M$, if $M^R$ computes $S$ (that is, if $M^R(n) = S[0 \dots n-1]$) \ No newline at end of file diff --git a/samples/texts/7662009/page_4.md b/samples/texts/7662009/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..83ca1da5427a8834de4f128bd48e8d8b2523bc6a --- /dev/null +++ b/samples/texts/7662009/page_4.md @@ -0,0 +1,40 @@ +for all $n \in \mathbb{N}$). In this case, write $R = M(S)$. We say $S$ is *Turing reducible to* $R$ and we write $S \leq_T R$, if there exists $M \in \text{OTM}$ such that $S \leq_T R$ via $M$. We say $S$ is *Turing equivalent to* $R$, and we write $S \equiv_T R$, if $S \leq_T R$ and $R \leq_T S$. The *Turing lower span of* $S$ is $\text{span}_T(S) = \{ R \in \mathbf{C} \mid R \leq_T S \}$ and the *Turing degree* of $S$ is $\text{deg}_T(S) = \{ R \in \mathbf{C} \mid R \equiv_T S \}$. + +Let $S, R \in \mathbf{C}$ and $M \in \text{OTM}$ such that $S \leq_T R$ via $M$. Let the notion $\#(M^R, S[0..n-1])$ denote the query usage of $M^R$ on $S[0..n-1]$, the number of bits of $R$ queried by $M$ when computing the string $S[0..n-1]$.⁴ We say $S$ is weak truth-table (wtt) reducible to $R$ via $M$ and we write $S \leq_{\text{wtt}} R$ via $M$, if $S \leq_T R$ via $M$ and there is a computable function $q: \mathbb{N} \to \mathbb{N}$ such that, for all $n \in \mathbb{N}$, $\#(M^R, S[0..n-1]) \leq q(n)$. Define $S \leq_{\text{wtt}} R$, $S \equiv_{\text{wtt}} R$, $\text{span}_{\text{wtt}}(S)$ and $\text{deg}_{\text{wtt}}(S)$ analogously to their counterparts for Turing reductions. Define + +$$ +\begin{align*} +\rho_M^-(S, R) &= \liminf_{n \to \infty} \frac{\#(M^R, S[0 \dots n-1])}{n}, \\ +\rho_M^+(S, R) &= \limsup_{n \to \infty} \frac{\#(M^R, S[0 \dots n-1])}{n}. +\end{align*} +$$ + +Viewing $R$ as a compressed version of $S$, $\rho_M^-(S, R)$ and $\rho_M^+(S, R)$ are respectively the best- and worst-case compression ratios as $M$ decompresses $R$ into $S$. 
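As a purely illustrative sketch (not taken from the paper), the following toy decompressor makes the query-usage ratio concrete. Here `M` and `R` are hypothetical stand-ins: the "sequence" $S$ interleaves the oracle $R$ with zeros ($S[2i] = R[i]$, $S[2i+1] = 0$), so computing $S[0..n-1]$ queries about half as many oracle bits as it outputs.

```python
# Toy decompressor (illustrative only): S[2i] = R[i], S[2i+1] = 0,
# so computing S[0..n-1] requires ceil(n/2) oracle queries.

def M(R, n):
    """Return (S[0..n-1], number of oracle bits queried)."""
    queries = 0
    out = []
    for i in range(n):
        if i % 2 == 0:
            out.append(R[i // 2])  # one oracle query per even position
            queries += 1
        else:
            out.append(0)          # odd positions need no query
    return out, queries

R = [1, 0, 1, 1, 0, 1, 0, 0] * 4  # a finite stand-in for the infinite oracle
ratios = [M(R, n)[1] / n for n in (8, 16, 32, 64)]
print(ratios)  # -> [0.5, 0.5, 0.5, 0.5]
```

For this toy reduction $\rho_M^-(S, R) = \rho_M^+(S, R) = \tfrac{1}{2}$: the oracle is a 2-to-1 compressed version of the sequence it decodes. The machine $M_d$ of Theorem 2.3 plays the same role, with ratios $\dim_H(S)$ and $\dim_P(S)$.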
Note that $0 \le \rho_M^-(S, R) \le \rho_M^+(S, R) \le \infty$.
+
+The following lemma is useful when one wants to compose two reductions:
+
+**Lemma 1.1.** [2] Let $S, S', S'' \in \mathbf{C}$ and $M_1, M_2 \in \text{OTM}$ such that $S' \leq_T S$ via $M_1$ and $S'' \leq_T S'$ via $M_2$. There exists $M \in \text{OTM}$ such that $S'' \leq_T S$ via $M$ and:
+
+$$
+\begin{align*}
+\rho_M^+(S'', S) &\leq \rho_{M_2}^+(S'', S')\rho_{M_1}^+(S', S). \\
+\rho_M^-(S'', S) &\leq \rho_{M_2}^-(S'', S')\rho_{M_1}^-(S', S). \\
+\rho_M^-(S'', S) &\leq \rho_{M_2}^+(S'', S')\rho_{M_1}^-(S', S).
+\end{align*}
+$$
+
+(The last bound is not explicitly stated in [2], but it holds for the same reason as the second one).
+
+For $S \in \mathbf{C}$, the lower and upper *Turing compression ratios* of $S$ are respectively defined as
+
+$$
+\begin{align*}
+\rho^{-}(S) &= \min_{\substack{R \in \mathbf{C} \\ M \in \text{OTM}}} \{\rho_{M}^{-}(S, R) \mid S \leq_{T} R \text{ via } M\}, \\
+\rho^{+}(S) &= \min_{\substack{R \in \mathbf{C} \\ M \in \text{OTM}}} \{\rho_{M}^{+}(S, R) \mid S \leq_{T} R \text{ via } M\}.
+\end{align*}
+$$
+
+Doty [2] showed that the above minima exist. Note that $0 \le \rho^{-}(S) \le \rho^{+}(S) \le 1$.
+
+⁴ If we instead define $\#(M^R, S[0..n-1])$ to be the index of the rightmost bit of $R$ queried by $M$ when computing $S[0..n-1]$, all results of the present paper still hold.
\ No newline at end of file
diff --git a/samples/texts/7662009/page_5.md
new file mode 100644
index 0000000000000000000000000000000000000000..08aee6d44abb77c80dffbcb2ffa497256d7ad61c
--- /dev/null
+++ b/samples/texts/7662009/page_5.md
@@ -0,0 +1,37 @@
+*Constructive Dimension*. Lutz [8] gives an introduction to the theory of constructive dimension. We use Mayordomo's characterization [10] of the constructive dimensions of sequences. 
For all $S \in \mathbf{C}$, the *constructive Hausdorff dimension* and the *constructive packing dimension* of $S$ are respectively defined as
+
+$$ \dim_H(S) = \liminf_{n \to \infty} \frac{C(S[0 \dots n-1])}{n} \quad \text{and} \quad \dim_P(S) = \limsup_{n \to \infty} \frac{C(S[0 \dots n-1])}{n}, $$
+
+where $C(w)$ denotes the *Kolmogorov complexity* of $w \in \{0, 1\}^*$ (see [6]). If $\dim_H(S) = \dim_P(S)$, we say $S$ is a *regular sequence*. Doty [2] showed that, for all $S \in \mathbf{C}$, $\rho^-(S) = \dim_H(S)$ and $\rho^+(S) = \dim_P(S)$.
+
+For all $X \subseteq \mathbf{C}$, the *constructive Hausdorff dimension* and the *constructive packing dimension* of $X$ are respectively defined as
+
+$$ \dim_H(X) = \sup_{S \in X} \dim_H(S) \text{ and } \dim_P(X) = \sup_{S \in X} \dim_P(S). $$
+
+## 2 Constructive Dimension Extractors
+
+Nies and Reimann [11] showed that wtt reductions cannot always extract constructive dimension.
+
+**Theorem 2.1 (Nies and Reimann [11]).** For every rational number $\alpha$ with $0 < \alpha < 1$, there exists a sequence $S \in \mathbf{C}$ such that, for all wtt reductions $M$, $\dim_H(M(S)) \leq \dim_H(S) = \alpha$.
+
+Ryabko [16, 17] discovered the next theorem.
+
+**Theorem 2.2 (Ryabko [16, 17]).** For all $S \in \mathbf{C}$ and $\delta > 0$, there exist $R \in \mathbf{C}$ and $N_d \in \text{OTM}$ such that
+
+1. $S \leq_T R$ via $N_d$ and $R \leq_T S$.
+
+2. $\rho_{N_d}(S, R) \leq \dim_H(S) + \delta$.
+
+The following theorem was shown in [2].
+
+**Theorem 2.3 (Doty [2]).** There is an oracle Turing machine $M_d$ such that, for all $S \in \mathbf{C}$, there exists $R \in \mathbf{C}$ such that
+
+1. $S \leq_{\text{wtt}} R$ via $M_d$.
+
+2. $\rho_{M_d}^{-}(S, R) = \dim_H(S)$.
+
+3. $\rho_{M_d}^{+}(S, R) = \dim_P(S)$.
+
+The following theorem, which is similar to Ryabko's Theorem 2.2, shows that the decoding machine $M_d$ of Theorem 2.3 can also be reversed if the compression requirements are weakened.
+
+**Theorem 2.4.** Let $M_d$ be the oracle Turing machine from Theorem 2.3. 
For all $\delta > 0$, there is an oracle Turing machine $M_e$ such that, for all $S \in C$, there is a sequence $R' \in C$ such that \ No newline at end of file diff --git a/samples/texts/7662009/page_6.md b/samples/texts/7662009/page_6.md new file mode 100644 index 0000000000000000000000000000000000000000..0c52f101d8eda04b9762b8e4ef1c8588dcc56067 --- /dev/null +++ b/samples/texts/7662009/page_6.md @@ -0,0 +1,35 @@ 1. $S \le_{\text{wtt}} R'$ via $M_d$ and $R' \le_{\text{wtt}} S$ via $M_e$. + +2. $\rho_{M_d}^+(S, R') \le \dim_P(S) + \delta$. + +*Proof.* Let $S \in \mathbf{C}$ and choose $R$ for $S$ as in Theorem 2.3. Let $\delta > 0$ and let $D \in (\dim_P(S), \dim_P(S) + \delta)$ be rational. By Theorem 2.3, there exists $n_0 \in \mathbb{N}$ such that, for all $n \ge n_0$, $\#(M_d^{R}, S[0..n-1]) < Dn$. + +$M_e$ will make use of the oracle Turing machine $M_d$. The proof of Theorem 2.3 in [2] shows that $M_d$ has the following useful properties. First, write $S = s_1s_2s_3...$ and $R = r_1r_2r_3...$, where the $s_i, r_i \in \{0,1\}^*$ are blocks such that $|s_i| = i$ and $|r_i| \le |s_i| + o(|s_i|)$. + +- $M_d$ computes $S$ from $R$ in stages, where it outputs the block $s_i$ on the $i^{\text{th}}$ stage. + +- Assuming that $M_d$ has already computed $s_1...s_i$, $M_d$ uses only the block $r_{i+1}$ and the prefix $s_1...s_i$ to compute $s_{i+1}$. + +Because of these properties, we can use $M_d$ to search for a sequence $R'$ that satisfies requirements 1 and 2 in the statement of Theorem 2.4. By Theorem 2.3, $R$ satisfies these requirements, so such an $R'$ will exist. By the above two properties of $M_d$, if we find a string $r' = r'_1...r'_i$ that satisfies requirements 1 and 2 (in the sense described below), we will always be able to find an extension $r'' = r'_{i+1}...r'_{j}$ (for some $j > i$) such that $r'r''$ continues to satisfy the requirements.
It will not matter if $r' \not\subset R$, since $M_d$ does not use the portion of $R$ coming before block $r_{i+1}$ to compute $s_{i+1}$. In other words, to reverse the computation of $M_d^{R'}$ and compute $R'$ from $S$, we don't need to find the $R$ from Theorem 2.3; we need only to find an $R'$ that is “close enough”. + +Define the oracle Turing machine $M_e$ with oracle $S \in \mathbf{C}$ as follows. Let $i \in \mathbb{N}$ and assume inductively that the prefix $r' = r'_1...r'_i \subset R'$ has been computed, so that, letting $|s_1...s_i| = n$, + +(a) $M_d^{r'}(n)$ outputs $S[0..n-1]$, + +(b) for all $k$ with $n_0 \le k \le n$, $\#(M_d^{r'}, S[0..k-1]) \le Dk$. + +Let $N$ be the smallest integer greater than $2^n$ such that $S[0..N-1] = s_1...s_{i'}$, for some $i' \in \mathbb{N}$. $M_e^S$ searches all strings $r'' \in \{0,1\}^N$ until it finds one that satisfies + +(a) $M_d^{r'r''}(N)$ outputs $S[0..N-1]$, + +(b) for all $k$ with $n_0 \le k \le N$, $\#(M_d^{r'r''}, S[0..k-1]) \le Dk$. + +$M_e^S$ then outputs $r''$ and saves it for the computation of the next extension of $R'$. By the existence of $R$ from Theorem 2.3, a simple induction on the stages of computation that $M_e$ performs, and the fact that $N$ is asymptotically larger than $n$, $M_e^S$ will always be able to find such an $r''$. Therefore, in the output sequence $R'$, for all but finitely many $N$, requirement (b) will be satisfied. Therefore the sequence $R'$ will satisfy the two requirements of Theorem 2.4. + +Finally, for any $n$, $M_e(n)$ makes no more than $2^{2n}$ queries to $S$ and therefore $M_e$ computes a wtt reduction. $\square$ \ No newline at end of file diff --git a/samples/texts/7662009/page_7.md b/samples/texts/7662009/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..c9a070b18a606ea03a7ed0558022d1aade8e631b --- /dev/null +++ b/samples/texts/7662009/page_7.md @@ -0,0 +1,25 @@ The following theorem is the main result of this paper.
It states that constructive packing dimension can be almost optimally extracted from a sequence of positive packing dimension, while at the same time, constructive Hausdorff dimension is *partially* extracted from this sequence, if it has positive Hausdorff dimension and packing dimension less than 1. The machine $M_e$ from Theorem 2.4 serves as the extractor. Intuitively, this works because $M_e$ compresses the sequence $S$ into the sequence $R$. Since $R$ is a compressed representation of $S$, $R$ must itself be less compressible than $S$. Because dimension measures the compressibility of a sequence, this means that the constructive dimensions of $R$ are greater than those of $S$. + +**Theorem 2.5.** For all $\epsilon > 0$ and $S \in \mathbf{C}$ such that $\dim_{\mathbf{P}}(S) > 0$, there exists $R \equiv_{\text{wtt}} S$ such that $\dim_{\mathbf{P}}(R) \ge 1 - \epsilon$ and $\dim_{\mathbf{H}}(R) \ge \frac{\dim_{\mathbf{H}}(S)}{\dim_{\mathbf{P}}(S)} - \epsilon$. + +*Proof.* Let $\epsilon > 0$ and $S \in \mathbf{C}$ such that $\dim_{\mathbf{P}}(S) > 0$. Let $\delta > 0$ and $R', M_d$ be as in Theorem 2.4. Let $R'' \in \mathbf{C}$ and $M \in \text{OTM}$ such that $R' \le_T R''$ via $M$, $\rho_M^-(R', R'') = \dim_{\mathbf{H}}(R')$ and $\rho_M^+(R', R'') = \dim_{\mathbf{P}}(R')$ (the existence of $M$ and $R''$ is asserted by Theorem 2.3). By Lemma 1.1, we have + +$$ \rho^+(S) \le \rho_{M_d}^+(S, R')\rho_M^+(R', R''), $$ + +which, by construction of $R'$ and $R''$, implies $\rho^+(S) \le (\dim_{\mathbf{P}}(S) + \delta)\dim_{\mathbf{P}}(R')$. Since $\rho^+(S) = \dim_{\mathbf{P}}(S)$, + +$$ \dim_{\mathbf{P}}(R') \ge \frac{\dim_{\mathbf{P}}(S)}{\dim_{\mathbf{P}}(S) + \delta}. $$ + +Moreover (by Lemma 1.1 again), $\rho^-(S) \le \rho_{M_d}^+(S, R')\rho_M^-(R', R'')$, which, by construction of $R'$ and $R''$, implies $\rho^-(S) \le (\dim_{\mathbf{P}}(S) + \delta)\dim_{\mathbf{H}}(R')$.
Since $\rho^-(S) = \dim_{\mathbf{H}}(S)$, + +$$ \dim_{\mathbf{H}}(R') \ge \frac{\dim_{\mathbf{H}}(S)}{\dim_{\mathbf{P}}(S) + \delta}. $$ + +Taking $\delta$ small enough and setting $R = R'$, the above inequalities give $\dim_{\mathbf{P}}(R) \ge 1 - \epsilon$ and $\dim_{\mathbf{H}}(R) \ge \frac{\dim_{\mathbf{H}}(S)}{\dim_{\mathbf{P}}(S)} - \epsilon$. $\square$ + +Theorem 2.5 has a number of applications, stated in the following corollaries, which shed light on the constructive dimensions of sequences, spans and degrees. + +**Corollary 2.6.** Let $S \in \mathbf{C}$ and assume that $\dim_{\mathbf{H}}(S) > 0$. Then $\dim_{\mathbf{H}}(\deg_{\mathbf{T}}(S))$, $\dim_{\mathbf{H}}(\deg_{\text{wtt}}(S))$, $\dim_{\mathbf{H}}(\operatorname{span}_{\mathbf{T}}(S))$ and $\dim_{\mathbf{H}}(\operatorname{span}_{\text{wtt}}(S))$ are all at least $\frac{\dim_{\mathbf{H}}(S)}{\dim_{\mathbf{P}}(S)}$. + +We obtain a zero-one law for the constructive packing dimension of Turing and weak truth-table lower spans and degrees. + +**Corollary 2.7.** For all $S \in \mathbf{C}$, the dimensions $\dim_{\mathbf{P}}(\deg_{\mathbf{T}}(S))$, $\dim_{\mathbf{P}}(\operatorname{span}_{\mathbf{T}}(S))$, $\dim_{\mathbf{P}}(\deg_{\text{wtt}}(S))$ and $\dim_{\mathbf{P}}(\operatorname{span}_{\text{wtt}}(S))$ are each either 0 or 1. \ No newline at end of file diff --git a/samples/texts/7662009/page_8.md b/samples/texts/7662009/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..068ec31f32f3428815671ceb19b804ffa4752c8b --- /dev/null +++ b/samples/texts/7662009/page_8.md @@ -0,0 +1,46 @@ Therefore Theorem 2.1, establishing the existence of wtt degrees of fractional constructive Hausdorff dimension, does not extend to constructive packing dimension. Because of Theorem 2.1, we must settle for more conditional results for constructive Hausdorff dimension. We focus attention on regular sequences.
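To make explicit why regularity is the useful hypothesis (a one-step unwinding of the definitions, not stated in the source): for a regular sequence the two constructive dimensions coincide, so the bound of Theorem 2.5 collapses,

$$ \dim_{\mathbf{H}}(R) \ge \frac{\dim_{\mathbf{H}}(S)}{\dim_{\mathbf{P}}(S)} - \epsilon = 1 - \epsilon \quad \text{whenever } \dim_{\mathbf{H}}(S) = \dim_{\mathbf{P}}(S) > 0. $$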
+ +**Corollary 2.8.** For all $\epsilon > 0$ and all regular $S \in \mathbf{C}$ such that $\dim_{\mathbf{H}}(S) > 0$, there exists $R \equiv_{\text{wtt}} S$ such that $\dim_{\mathbf{H}}(R) \ge 1 - \epsilon$. + +**Corollary 2.9.** For all *regular* $S \in \mathbf{C}$ such that $\dim_{\mathbf{H}}(S) > 0$, + +$$ +\begin{align*} +\dim_H(\operatorname{span}_{\text{wtt}}(S)) &= \dim_H(\deg_{\text{wtt}}(S)) = \dim_H(\operatorname{span}_T(S)) = \dim_H(\deg_T(S)) = \\ +&\dim_P(\operatorname{span}_{\text{wtt}}(S)) = \dim_P(\deg_{\text{wtt}}(S)) = \dim_P(\operatorname{span}_T(S)) = \dim_P(\deg_T(S)) = 1. +\end{align*} +$$ + +It remains open whether every Turing lower span or degree of positive constructive Hausdorff dimension contains a regular sequence of positive constructive Hausdorff dimension. If so, this would imply a zero-one law for constructive Hausdorff dimension similar to Corollary 2.7. + +We note that the zero-one law for the constructive packing dimension of Turing and wtt lower spans and degrees also follows from the following theorem due to Fortnow, Hitchcock, Pavan, Vinodchandran and Wang [4], giving a polynomial-time extractor for constructive packing dimension. For $R, S \in \mathbf{C}$, write $R \le_{\mathrm{T}}^{p} S$ if $R \le_{\mathrm{T}} S$ via an OTM that, on input $n$, runs in time polynomial in $n$, and similarly for $\equiv_{\mathrm{T}}^{p}$. + +**Theorem 2.10 ([4]).** For all $\epsilon > 0$ and all $S \in \mathbf{C}$ such that $\dim_{\mathbf{P}}(S) > 0$, there exists $R \equiv_{\mathrm{T}}^{p} S$ such that $\dim_{\mathbf{P}}(R) \ge 1 - \epsilon$. + +In fact, Theorem 2.10 holds for any resource-bounded packing dimension [7] defined by Turing machines allowed at least polynomial space, which includes constructive packing dimension as a special case, thus proving a spectrum of zero-one packing-dimension laws for dimensions defined by resource bounds of at least polynomial space, and for degrees and lower spans that are at least polynomial-time computable.
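The compression view of dimension used throughout this section can be illustrated with a computable stand-in. The sketch below is only an illustration: Kolmogorov complexity is uncomputable, and the zlib-compressed size is merely a crude upper bound on it, so these ratios are rough analogues of $C(S[0..n-1])/n$, not the constructive dimensions themselves.

```python
import random
import zlib

def prefix_ratios(bits: str, lengths):
    """Estimate C(S[0..n-1]) / n for several prefix lengths n, using the
    zlib-compressed size (in bits) of the ASCII prefix as an upper-bound
    proxy for Kolmogorov complexity."""
    ratios = []
    for n in lengths:
        compressed = zlib.compress(bits[:n].encode("ascii"), 9)
        ratios.append(8 * len(compressed) / n)
    return ratios

random.seed(0)
ns = [4096, 8192, 16384]
# A pseudorandom (incompressible-looking) sequence vs. a trivial one.
random_bits = "".join(random.choice("01") for _ in range(ns[-1]))
zero_bits = "0" * ns[-1]

r_random = prefix_ratios(random_bits, ns)
r_zero = prefix_ratios(zero_bits, ns)
```

On such inputs the proxy behaves as the theory predicts: the pseudorandom sequence stays close to one output bit per source bit, while the all-zero sequence compresses to almost nothing.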
+ +## 3 Nonexistence of Universal Extractors + +The wtt reduction in the proof of Theorem 2.5 is uniform in the sense that, for all $\epsilon > 0$, there is a single wtt reduction $M$, universal for $\epsilon$ and all sequences $S$, such that $\dim_H(M(S)) \ge \dim_H(S)/\dim_P(S) - \epsilon$. + +While it remains open whether Turing reductions can extract constructive Hausdorff dimension, we can show that there is no universal Turing reduction that is guaranteed to raise, to a fixed level, the dimension of all sequences of sufficiently large dimension. + +**Theorem 3.1.** For every Turing reduction $M$ and all reals $\alpha, \beta$ with $0 < \alpha < \beta < 1$, there exists $S \in \mathbf{C}$ with $\dim_H(S) \ge \alpha$ such that $M(S)$ does not exist or $\dim_H(M(S)) < \beta$. \ No newline at end of file diff --git a/samples/texts/7662009/page_9.md b/samples/texts/7662009/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..ee9b4d2b426282ff3c99858e2d1bda50d5e2fa5f --- /dev/null +++ b/samples/texts/7662009/page_9.md @@ -0,0 +1,17 @@ *Proof.* For this proof, it will be convenient to say that $R \leq_T S$ via $M$ if $M^S(n)$ outputs $R[n]$, rather than $R[0..n-1]$, bearing in mind that both definitions of a Turing reduction are equivalent. + +Suppose for the sake of contradiction that there exist reals $\alpha, \beta$ with $0 < \alpha < \beta < 1$ and a Turing reduction $M$ such that, for all $S \in C$ satisfying $\dim_H(S) \ge \alpha$, $\dim_H(R) \ge \beta$, where $R = M(S)$. Fix rationals $\alpha', \gamma$ such that $\alpha < \alpha' < \gamma < \beta$. We will convert $M$ into a truth-table reduction $N$ (a reduction that halts on all oracles, which is also a wtt reduction) that guarantees the slightly weaker condition that if $\dim_H(S) > \alpha'$, then $\dim_H(N(S)) \ge \beta$.
Then for any $S \in C$ such that $\dim_H(S) = \gamma > \alpha'$, it follows that $\dim_H(N(S)) \ge \beta > \gamma = \dim_H(S)$, which contradicts Theorem 2.1. + +On input $n \in \mathbb{N}$ and with oracle sequence $S$, $N^S(n)$ simulates $M^S(n)$. In parallel, for all integers $m > n$, $N$ searches for a program of length at most $\alpha'm$ computing $S[0..m-1]$. If $N$ finds such a program before the simulation of $M^S(n)$ terminates, then $N$ outputs 0. If instead the simulation of $M^S(n)$ halts before such a short program is found, then $N$ outputs $R[n]$, the output bit of $M^S(n)$. + +If $\dim_H(S) < \alpha'$, then for infinitely many $m \in \mathbb{N}$, $C(S[0..m-1]) \le \alpha'm$. Therefore $N^S$ halts, although the output sequence $N(S)$ may contain a lot of 0's, which is acceptable because we do not care what $N$ outputs if $\dim_H(S) < \alpha'$. + +If $\dim_H(S) \ge \alpha'$, then $M^S$ is guaranteed to halt and to compute $R$ such that $\dim_H(R) \ge \beta$. Therefore $N^S$ halts. If $\dim_H(S) = \alpha'$, then once again, we do not care what $N$ outputs. If $\dim_H(S) > \alpha'$, then only finitely many $m$ satisfy $C(S[0..m-1]) \le \alpha'm$. Therefore the parallel search for short programs will never succeed once $N$ begins checking only prefixes of $S$ of sufficiently large length. This means that from that point on, $N$ will simulate $M$ exactly, computing a sequence $R'$ that is a finite variation of $R$. Since dimension is unchanged under finite variations, $\dim_H(R') = \dim_H(R) \ge \beta$. $\square$ + +Theorem 3.1 tells us that, contrary to the proofs of Theorems 2.4 and 2.5, any extractor construction for Turing reductions must make use of some property of the sequence beyond a simple bound on its dimension. + +**Acknowledgments.** We thank Joe Miller for assistance with the proof of Theorem 2.4, as well as John Hitchcock, Jan Reimann and André Nies for their insightful comments. 
We also thank the American Institute of Mathematics which generously invited us to the Workshop on Effective Randomness; this paper is a result of a workgroup discussing open questions during this workshop. Besides the American Institute of Mathematics, we would also like to thank the organizers Denis Hirschfeldt and Joe Miller of the workshop as well as the participants who discussed this research topic with us. + +**References** + +1. Krishna Athreya, John Hitchcock, Jack H. Lutz and Elvira Mayordomo. Effective strong dimension, algorithmic information and computational complexity. *SIAM Journal on Computing*. To appear. \ No newline at end of file diff --git a/samples/texts/821833/page_1.md b/samples/texts/821833/page_1.md new file mode 100644 index 0000000000000000000000000000000000000000..78dee9d6c9e65f04ef4291ffc0a4e3908b9c6ac9 --- /dev/null +++ b/samples/texts/821833/page_1.md @@ -0,0 +1,28 @@ +The generalized 3-connectivity of Lexicographic product +graphs + +Xueliang Li, Yaping Mao + +► To cite this version: + +Xueliang Li, Yaping Mao. The generalized 3-connectivity of Lexicographic product graphs. Discrete Mathematics and Theoretical Computer Science, DMTCS, 2014, Vol. 16 no. 1 (in progress) (1), pp.339-353. hal-01179222 + +HAL Id: hal-01179222 + +https://hal.inria.fr/hal-01179222 + +Submitted on 22 Jul 2015 + +**HAL** is a multi-disciplinary open access +archive for the deposit and dissemination of sci- +entific research documents, whether they are pub- +lished or not. The documents may come from +teaching and research institutions in France or +abroad, or from public or private research centers. + +L'archive ouverte pluridisciplinaire **HAL**, est +destinée au dépôt et à la diffusion de documents +scientifiques de niveau recherche, publiés ou non, +émanant des établissements d'enseignement et de +recherche français ou étrangers, des laboratoires +publics ou privés. 
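The pages that follow argue inside the lexicographic product $G \circ H$. As a concrete reference point (an illustration using the standard definition, not code from the paper): the product has vertex set $V(G) \times V(H)$, with $(u, v)$ adjacent to $(u', v')$ iff $uu' \in E(G)$, or $u = u'$ and $vv' \in E(H)$.

```python
from itertools import product

def lexicographic_product(G, H):
    """Build G∘H for graphs given as (vertex list, set of frozenset edges):
    (u, v) ~ (u', v')  iff  uu' in E(G), or u = u' and vv' in E(H)."""
    VG, EG = G
    VH, EH = H
    V = [(u, v) for u in VG for v in VH]
    E = set()
    for (u, v), (up, vp) in product(V, V):
        if (u, v) == (up, vp):
            continue
        if frozenset((u, up)) in EG or (u == up and frozenset((v, vp)) in EH):
            E.add(frozenset(((u, v), (up, vp))))
    return V, E

# Toy check: P_3 ∘ K_2 has 6 vertices and 3 + 2*4 = 11 edges.
P3 = ([1, 2, 3], {frozenset((1, 2)), frozenset((2, 3))})
K2 = (["a", "b"], {frozenset(("a", "b"))})
V, E = lexicographic_product(P3, K2)
```

Each copy $H(u)$ is the set $\{(u, v) : v \in V(H)\}$, and adjacent copies are joined completely; this complete join between copies is what drives the connectivity behaviour of $G \circ H$ studied below.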
\ No newline at end of file diff --git a/samples/texts/821833/page_10.md b/samples/texts/821833/page_10.md new file mode 100644 index 0000000000000000000000000000000000000000..e5dcb161ea2a0087fe5977f54b5c4de9c0eff60b --- /dev/null +++ b/samples/texts/821833/page_10.md @@ -0,0 +1,15 @@ **Case 2.** $d_{P_n \circ H}(x, y) = 1$ and $d_{P_n \circ H}(y, z) \geq 2$. + +We may assume that $x \in V(H(u_1))$, $y \in V(H(u_2))$, $z \in V(H(u_i))$ ($4 \leq i \leq n$); as the following argument shows, this assumption does not affect the correctness of the proof. Let $y', z'$ be the vertices corresponding to $y, z$ in $H(u_1)$, $x', z''$ be the vertices corresponding to $x, z$ in $H(u_2)$ and $x'', y''$ be the vertices corresponding to $x, y$ in $H(u_i)$. Let $P' = u_2u_3\cdots u_i$. Clearly, $\kappa(P' \circ H) \geq m$. From Lemma 2.1, there is a $z, U$-fan in $P' \circ H$, where $U = V(H(u_2)) = \{(u_2, v_j) | 1 \leq j \leq m\}$. Thus there exist $m$ pairwise internally disjoint paths $P_1, P_2, \dots, P_m$ such that each $P_j$ ($1 \leq j \leq m$) is a path connecting $z$ and $(u_2, v_j)$. + +If $x, y', z'$ are distinct vertices in $H(u_1)$, without loss of generality, we let $\{x, y', z'\} = \{(u_1, v_j) | 1 \leq j \leq 3\}$ and $\{x', y, z''\} = \{(u_2, v_j) | 1 \leq j \leq 3\}$, then the trees $T_j = x(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup y(u_1, v_j) \cup P_j$ ($4 \leq j \leq m$) and $T_1 = xx' \cup x'y' \cup yy' \cup P_1$ and $T_2 = xy \cup P_2$ and $T_3 = xz'' \cup z'z'' \cup z'y \cup P_3$ are $m$ internally disjoint Steiner trees connecting $S$; see Fig. 5 (a). + +Fig. 5: Graphs for Case 2 of Lemma 2.6. + +Suppose that two of $x, y', z'$ are the same vertex in $H(u_1)$.
If $y' = z'$, without loss of generality, let $\{x, y'\} = \{(u_1, v_1), (u_1, v_2)\}$ and $\{x', y\} = \{(u_2, v_1), (u_2, v_2)\}$, then the trees $T_j = x(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup y(u_1, v_j) \cup P_j$ ($3 \leq j \leq m$) and $T_1 = xy \cup P_1$ and $T_2 = xx' \cup x'y' \cup yy' \cup P_2$ are $m$ internally disjoint Steiner trees connecting $S$; see Fig. 5 (b). The other cases ($x = y'$ or $x = z'$) can be proved with similar arguments. + +Suppose that $x, y', z'$ are the same vertex in $H(u_1)$. Without loss of generality, let $x = (u_1, v_1)$, $y = (u_2, v_1)$ and $z = (u_i, v_1)$. Then the trees $T_j = x(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup y(u_1, v_j) \cup P_j$ ($2 \leq j \leq m$) and $T_1 = xy \cup P_1$ are $m$ internally disjoint Steiner trees connecting $S$. + +**Case 3.** $d_{P_n \circ H}(x, y) \geq 2$ and $d_{P_n \circ H}(y, z) \geq 2$. + +We may assume that $x \in V(H(u_i))$, $y \in V(H(u_j))$, $z \in V(H(u_k))$, where $i < j < k$, $|j - i| \geq 2$, $|k - j| \geq 2$, $1 \leq i \leq n - 4$, $3 \leq j \leq n - 2$ and $5 \leq k \leq n$. Let $P' = u_i, u_{i+1}, \dots, u_{j-1}$ and $P'' = u_{j+1}, u_{j+2}, \dots, u_k$. Then $P'$ and $P''$ are two paths of order at least 2. Since $\kappa(P' \circ H) \geq m$, from Lemma 2.2, if we add the vertex $y$ to $P' \circ H$ and join an edge from $y$ to each $(u_{j-1}, v_r)$ ($1 \leq r \leq m$), then $\kappa((P' \circ H) \lor \{y, V(H(u_{j-1}))\}) \geq m$. By the same reason, $\kappa((P'' \circ H) \lor \{y, V(H(u_{j+1}))\}) \geq m$. From Menger's Theorem, there exist $m$ internally disjoint paths connecting $x$ and $y$ in $(P' \circ H) \lor \{y, V(H(u_{j-1}))\}$, say $P'_1, P'_2, \dots, P'_m$. Also there exist $m$ internally disjoint paths connecting $y$ and $z$ in $(P'' \circ H) \lor \{y, V(H(u_{j+1}))\}$, say $P''_1, P''_2, \dots, P''_m$. Note that the union of any path in $\{P'_i | 1 \leq i \leq m\}$ with any path in $\{P''_j | 1 \leq j \leq m\}$ is a Steiner tree connecting $S$.
Then the trees $T_i = P'_i \cup P''_i$ ($1 \le i \le m$) are $m$ internally disjoint Steiner trees connecting $S$. □ \ No newline at end of file diff --git a/samples/texts/821833/page_11.md b/samples/texts/821833/page_11.md new file mode 100644 index 0000000000000000000000000000000000000000..3060d9308dbe361bbca2842dab8653065944d8e4 --- /dev/null +++ b/samples/texts/821833/page_11.md @@ -0,0 +1,19 @@ From Lemmas 2.4, 2.5 and 2.6, we conclude that, for any $S \subseteq V(P_n \circ H)$, there exist $m$ internally disjoint Steiner trees connecting $S$, which implies that $\kappa_{P_n \circ H}(S) \ge m$. From the arbitrariness of $S$, we have $\kappa_3(P_n \circ H) \ge m = |V(H)|$. The proof of Proposition 2.3 is complete. + +**Remark 1.** As we have seen, for any $S = \{x, y, z\} \subseteq V(P_n \circ H)$, there exist $m$ internally disjoint Steiner trees connecting $S$ in $P_n \circ H$. One can see that we use at most one path in $H(u_i)$ or $H(u_j)$ only when $x, y, z$ belong to two copies $H(u_i)$ and $H(u_j)$ such that $u_i u_j \in E(P_n)$ ($1 \le i \ne j \le n$). This will be used in Subsection 2.3 to prove our main result, Theorem 1.4. + +## 2.2 The Lexicographic product of a tree and a connected graph + +In this subsection, we consider the generalized 3-connectivity of the lexicographic product of a tree and a connected graph, which can be seen as a generalization of the result in the last subsection and as a preparation for the next subsection. + +**Proposition 2.7** Let $H$ be a connected graph and $T$ be a tree with $n$ vertices. Then $\kappa_3(T \circ H) \ge |V(H)|$. Moreover, the bound is sharp. + +*Proof:* Set $V(T) = \{u_1, u_2, \dots, u_n\}$ and $V(H) = \{v_1, v_2, \dots, v_m\}$. It suffices to prove that $\kappa_{T \circ H}(S) \ge m$ for any $S = \{x, y, z\} \subseteq V(T \circ H)$, that is, for any $S = \{x, y, z\} \subseteq V(T \circ H)$ there exist $m$ internally disjoint Steiner trees connecting $S$ in $T \circ H$.
Recall that $V(T \circ H) = \bigcup_{i=1}^n V(H(u_i))$. Without loss of generality, let $x \in V(H(u_i)), y \in V(H(u_j))$ and $z \in V(H(u_k))$. + +If $i, j$ and $k$ are not distinct integers, then there exists a path in $T$ containing $u_i, u_j$ and $u_k$, say $P_n$ (we may assume $j = k$; then $y, z \in V(H(u_j))$). From Proposition 2.3 and Remark 1, there exist $m$ internally disjoint Steiner trees connecting $S$, which occupy at most one path in $H(u_i)$ or $H(u_j)$ when $x, y, z$ belong to two copies $H(u_i)$ and $H(u_j)$ such that $u_i u_j \in E(P_n)$ ($1 \le i \ne j \le n$). We now suppose that $i, j$ and $k$ are distinct integers, and that there exists no path containing $u_i, u_j$ and $u_k$. Then there exists a subtree $T'$ in $T$ such that $d_{T'}(u_i) = d_{T'}(u_j) = d_{T'}(u_k) = 1$ and all the vertices of $T' \setminus \{u_i, u_j, u_k\}$ have degree 2 in $T'$ except for one vertex, say $u_1$ with $d_{T'}(u_1) = 3$. Clearly, there is a unique path $P_1$ connecting $u_1$ and $u_i$, a unique path $P_2$ connecting $u_1$ and $u_j$, and a unique path $P_3$ connecting $u_1$ and $u_k$. It is clear that $P_1, P_2, P_3$ are three internally disjoint paths. Set $T'' = T' \setminus \{u_i, u_j, u_k\}$. Obviously, $T'' \circ H$ is $m$-connected. We have the following four cases to consider. + +**Case 1.** $d_{T'}(u_1, u_i) = d_{T'}(u_1, u_j) = d_{T'}(u_1, u_k) = 1$. Then the trees $T_r = x(u_1, v_r) \cup y(u_1, v_r) \cup z(u_1, v_r)$ ($1 \le r \le m$) are $m$ internally disjoint Steiner trees connecting $S$. + +**Case 2.** $d_{T'}(u_1, u_i) \ge 2$ and $d_{T'}(u_1, u_j) = d_{T'}(u_1, u_k) = 1$. Let $u_{i-1}$ be the vertex such that $u_{i-1}u_i \in E(T')$ and $u_{i-1}$ is closer to $u_1$ than $u_i$ in $P_1$. Recall that $T'' \circ H$ is $m$-connected. From Lemma 2.2, $(T'' \circ H) \lor \{x, V(H(u_{i-1}))\}$ is $m$-connected and hence there exists an $x,U$-fan in $(T'' \circ H) \lor \{x, V(H(u_{i-1}))\}$, where $U = V(H(u_1))$.
So there exist $m$ internally disjoint paths $P_{1,1}, P_{1,2}, \dots, P_{1,m}$ connecting $x$ and $(u_1,v_1), (u_1,v_2), \dots, (u_1,v_m)$, respectively. Therefore, the trees $T_j = P_{1,j} \cup y(u_1,v_j) \cup z(u_1,v_j)$ ($1 \le j \le m$) are $m$ internally disjoint Steiner trees connecting $S$. + +**Case 3.** $d_{T'}(u_1, u_i) \ge 2$, $d_{T'}(u_1, u_j) \ge 2$ and $d_{T'}(u_1, u_k) = 1$. Let $u_{i-1}$ be the vertex such that $u_{i-1}u_i \in E(T')$ and $u_{i-1}$ is closer to $u_1$ than $u_i$ in $P_1$. Recall that $T'' \circ H$ is $m$-connected. From Lemma 2.2, $(T'' \circ H) \lor \{x, V(H(u_{i-1}))\}$ is $m$-connected and hence there exists an $x,U$-fan in $(T'' \circ H) \lor \{x, V(H(u_{i-1}))\}$, where $U = V(H(u_1))$. So there exist $m$ internally disjoint paths $P_{1,1}, P_{1,2}, \dots, P_{1,m}$ connecting $x$ and $(u_1,v_1), (u_1,v_2), \dots, (u_1,v_m)$, respectively (note that the paths $P_{1,1}, P_{1,2}, \dots, P_{1,m}$ \ No newline at end of file diff --git a/samples/texts/821833/page_12.md b/samples/texts/821833/page_12.md new file mode 100644 index 0000000000000000000000000000000000000000..e74358e10947c0ae905b96fe501bc4ed7edc64b0 --- /dev/null +++ b/samples/texts/821833/page_12.md @@ -0,0 +1,23 @@ belong to $P_1 \circ H$). Similarly, there exist $m$ internally disjoint paths $P_{2,1}, P_{2,2}, \dots, P_{2,m}$ connecting $y$ and $(u_1, v_1), (u_1, v_2), \dots, (u_1, v_m)$, respectively (note that $P_{2,1}, P_{2,2}, \dots, P_{2,m}$ belong to $P_2 \circ H$). Therefore, the trees $T_r = P_{1,r} \cup P_{2,r} \cup z(u_1, v_r)$ ($1 \le r \le m$) are $m$ internally disjoint Steiner trees connecting $S$. + +**Case 4.** $d_{T'}(u_1, u_i) \ge 2$, $d_{T'}(u_1, u_j) \ge 2$ and $d_{T'}(u_1, u_k) \ge 2$.
Similar to the above method, there exist $m$ internally disjoint paths $P_{1,1}, P_{1,2}, \dots, P_{1,m}$ connecting $x$ and $(u_1, v_1), (u_1, v_2), \dots, (u_1, v_m)$, $m$ internally disjoint paths $P_{2,1}, P_{2,2}, \dots, P_{2,m}$ connecting $y$ and $(u_1, v_1), (u_1, v_2), \dots, (u_1, v_m)$, and $m$ internally disjoint paths $P_{3,1}, P_{3,2}, \dots, P_{3,m}$ connecting $z$ and $(u_1, v_1), (u_1, v_2), \dots, (u_1, v_m)$, respectively. Therefore, the trees $T_r = P_{1,r} \cup P_{2,r} \cup P_{3,r}$ ($1 \le r \le m$) are $m$ internally disjoint Steiner trees connecting $S$. + +From the above arguments, for any $S = \{x,y,z\} \subseteq V(T \circ H)$, there exist $m$ internally disjoint Steiner trees connecting $S$, which implies that $\kappa_{T \circ H}(S) \ge m$. From the arbitrariness of $S$, we have $\kappa_3(T \circ H) \ge m = |V(H)|$. The proof is complete. $\square$ + +**Remark 2.** As we have seen, for any $S = \{x,y,z\} \subseteq V(T \circ H)$, there exist $m$ internally disjoint Steiner trees connecting $S$ in $T \circ H$. One can see that we use at most one path in $H(u_i)$ or $H(u_j)$ only when $x, y, z$ belong to two copies $H(u_i)$ and $H(u_j)$ such that $u_i u_j \in E(T)$ ($1 \le i \ne j \le n$). This will also be used in Subsection 2.3 to prove our main result, Theorem 1.4. + +## 2.3 The Lexicographic product of two general graphs + +After the above preparations, we are now ready to prove Theorem 1.4. + +**Proof of Theorem 1.4:** Without loss of generality, we set $\kappa_3(G) = \ell$. Recall that $V(G) = \{u_1, u_2, \dots, u_n\}$, $V(H) = \{v_1, v_2, \dots, v_m\}$. From the definition of $\kappa_3(G \circ H)$, we need to prove that $\kappa_{G \circ H}(S) \ge \ell m$ for any $S = \{x,y,z\} \subseteq V(G \circ H)$; that is, it suffices to show that there exist $\ell m$ internally disjoint Steiner trees connecting $S$ in $G \circ H$. Clearly, $V(G \circ H) = \bigcup_{i=1}^n V(H(u_i))$.
Without loss of generality, let $x \in V(H(u_i))$, $y \in V(H(u_j))$ and $z \in V(H(u_k))$. We have the following three cases to consider. + +**Case 1.** The vertices $x,y,z$ belong to the same $V(H(u_i))$ ($1 \le i \le n$). + +Without loss of generality, let $x,y,z \in V(H(u_1))$. Since $\delta(G) \ge \kappa_3(G) \ge \ell$, it follows that the vertex $u_1$ has at least $\ell$ neighbors in $G$, say $u_2, u_3, \dots, u_{\ell+1}$. Then the trees $T_{i,j} = x(u_i, v_j) \cup y(u_i, v_j) \cup z(u_i, v_j)$ ($2 \le i \le \ell+1$, $1 \le j \le m$) are $\ell m$ internally disjoint Steiner trees connecting $S$ in $G \circ H$, as desired. + +**Case 2.** Only two vertices of $\{x,y,z\}$ belong to some copy $H(u_i)$ ($1 \le i \le n$). + +Without loss of generality, let $x,y \in H(u_1)$ and $z \in H(u_2)$. Since $\kappa(G) \ge \kappa_3(G) = \ell$, it follows that there exist $\ell$ internally disjoint paths connecting $u_1$ and $u_2$ in $G$, say $P_1,P_2,\dots,P_\ell$. Clearly, there exists at most one of $P_1,P_2,\dots,P_\ell$, say $P_1$, such that $P_1 = u_1 u_2$. From Remark 1, there exist $m$ internally disjoint Steiner trees connecting $S$ in $P_1 \circ H$, which occupy at most one path in $H(u_1)$ or $H(u_2)$. For $P_i$ ($2 \le i \le \ell$), there exist $m$ internally disjoint Steiner trees connecting $S$ in $P_i \circ H$, which occupy no edge in $H(u_j)$ ($1 \le j \le n$). So the total number of internally disjoint Steiner trees connecting $S$ is $m + (\ell - 1)m = \ell m$, as desired. + +**Case 3.** The vertices $x,y,z$ are contained in distinct $H(u_i)$s. + +Without loss of generality, let $x \in H(u_1)$, $y \in H(u_2)$ and $z \in H(u_3)$. Since $\kappa_3(G) = \ell$, it follows that there exist $\ell$ internally disjoint Steiner trees connecting $\{u_1,u_2,u_3\}$ in $G$, say $T_1,T_2,\dots,T_\ell$. Observe that $\bigcup_{i=1}^\ell T_i$ is a subgraph of $G$ and $(\bigcup_{i=1}^\ell T_i) \circ H$ is a subgraph of $G \circ H$.
If there exist $\ell m$ internally disjoint Steiner trees connecting $S$ in $(\bigcup_{i=1}^\ell T_i) \circ H$, then these trees are also $\ell m$ internally disjoint Steiner trees connecting $S$ in $G \circ H$. One can see that $(\bigcup_{i=1}^\ell T_i) \circ H = \bigcup_{i=1}^\ell (T_i \circ H)$, and for any two trees $T,T' \in \{T_i | 1 \le i \le \ell\}$ we have $(T \circ H) \cap (T' \circ H) = H(u_1) \cup H(u_2) \cup H(u_3)$. From Remark 2, for \ No newline at end of file diff --git a/samples/texts/821833/page_13.md b/samples/texts/821833/page_13.md new file mode 100644 index 0000000000000000000000000000000000000000..94954f20d7515d16f87b627ac44a0d38161392a --- /dev/null +++ b/samples/texts/821833/page_13.md @@ -0,0 +1,31 @@
+ +**Proof of Theorem 1.5:** From Lemma 3.2, for a connected graph $G$, if $\kappa(G) = 4s + r$, then $\kappa_3(G) \ge 3s + \lfloor r/2 \rfloor$, where $r \in \{0, 1, 2, 3\}$. So $\kappa_3(G) \ge 3 \cdot \frac{\kappa(G)-r}{4} + \lfloor r/2 \rfloor = \frac{3}{4}\kappa(G) - \frac{3}{4}r + \lfloor r/2 \rfloor$. Therefore, $\kappa(G) \le \lfloor \frac{4}{3}\kappa_3(G) + r - \frac{4}{3}\lfloor r/2 \rfloor \rfloor$. From Lemma 3.1, $\kappa_3(G \circ H) \le \kappa(G \circ H)$. Furthermore, by Theorem 1.2, we have $\kappa_3(G \circ H) \le \kappa(G \circ H) = \kappa(G)|V(H)| \le \lfloor \frac{4}{3}\kappa_3(G) + r - \frac{4}{3}\lfloor r/2 \rfloor \rfloor|V(H)|$, where $r = \lfloor \kappa(G) \mod 4 \rfloor$. The proof is now complete. $\square$ + +The following example indicates that both the lower bound of Theorem 1.4 and the upper bound of Theorem 1.5 are sharp. + +**Example 1.** Let $G$ is a path of order $n$ ($n \ge 4$) and $H$ is a path of order 3. Then $|V(G)| = n$, $|V(H)| = 3$ and $\kappa_3(G) = \kappa_3(H) = 1$. Since $\kappa(G) = 1$, it follows that $r = 1$ and $\kappa_3(G \circ H) \le \lfloor \frac{4}{3}\kappa_3(G) + r - \frac{4}{3}\lfloor r/2 \rfloor \rfloor|V(H)| = 3\lfloor \frac{4}{3}\kappa_3(G) - \frac{1}{3} \rfloor = 3$ by Theorem 1.5. From Theorem 1.4, we have $\kappa_3(G \circ H) \ge \kappa_3(G)|V(H)| = 3$. So $\kappa_3(G \circ H) = 3$. Thus, this is a sharp example for both Theorem 1.4 and Theorem 1.5. + +Let us now turn our attention to the generalized 3-connectivity of Cartesian product graphs. As we know in Theorem 1.3, Li et al. gave a lower bound of $\kappa_3(G \square H)$. We now derive an upper bound of $\kappa_3(G \square H)$. + +From Theorem 1.1, we know that $\kappa(G \square H) \ge \kappa(G) + \kappa(H)$. But we mention that it is incorrectly claimed that $\kappa(G \square H) = \kappa(G) + \kappa(H)$ holds for any connected $G$ and $H$, see [18] (p-308). Let $G$ be a graph obtained from two triangles by identifying one vertex in each of them. 
Then $\kappa(G) = 1$ and $\kappa(G \square G) = 4 > 2 = \kappa(G) + \kappa(G)$, see [18] (p-309). In [41], Špacapan obtained the following result. + +**Lemma 3.3** [41] Let $G$ and $H$ be two nontrivial graphs. Then + +$$\kappa(G \square H) = \min\{\kappa(G)|V(H)|, \kappa(H)|V(G)|, \delta(G) + \delta(H)\}.$$ + +By the above result, we can derive a sharp upper bound of the generalized 3-connectivity of Cartesian product graphs, which is stated as Theorem 1.6. + +**Proof of Theorem 1.6:** From Lemma 3.2, for a connected graph $G$, if $\kappa(G) = 4s + r_1$, then $\kappa_3(G) \ge 3s + \lceil r_1/2 \rceil$, where $r_1 \in \{0, 1, 2, 3\}$. So $\kappa_3(G) \ge 3 \cdot \frac{\kappa(G)-r_1}{4} + \lceil r_1/2 \rceil = \frac{3}{4}\kappa(G) - \frac{3}{4}r_1 + \lceil r_1/2 \rceil$, where \ No newline at end of file diff --git a/samples/texts/821833/page_14.md new file mode 100644 index 0000000000000000000000000000000000000000..88764bba5b3725ace19f6ef52ceeaf8c8ef9eb4b --- /dev/null +++ b/samples/texts/821833/page_14.md @@ -0,0 +1,29 @@ +$r_1 \equiv \kappa(G) \pmod{4}$. Therefore, $\kappa(G) \le \frac{4}{3}\kappa_3(G) + r_1 - \frac{4}{3}\left\lceil \frac{r_1}{2} \right\rceil$. Similarly, for a connected graph $H$, $\kappa(H) \le \frac{4}{3}\kappa_3(H) + r_2 - \frac{4}{3}\left\lceil \frac{r_2}{2} \right\rceil$, where $r_2 \equiv \kappa(H) \pmod{4}$. From Lemma 3.1, $\kappa_3(G \square H) \le \kappa(G \square H)$. Furthermore, by Lemma 3.3, we have $\kappa_3(G \square H) \le \kappa(G \square H) = \min\{\kappa(G)|V(H)|, \kappa(H)|V(G)|, \delta(G)+\delta(H)\} \le \min\{\left\lfloor \frac{4}{3}\kappa_3(G) + r_1 - \frac{4}{3}\left\lceil \frac{r_1}{2} \right\rceil \right\rfloor|V(H)|, \left\lfloor \frac{4}{3}\kappa_3(H) + r_2 - \frac{4}{3}\left\lceil \frac{r_2}{2} \right\rceil \right\rfloor|V(G)|, \delta(G) + \delta(H)\}$. The proof is now complete. $\square$ + +To show the sharpness of the upper bound of Theorem 1.6, we consider the following example.
+ +**Example 2.** Let $G$ be a path of order $n$ ($n \ge 4$) and $H$ be a path of order $m$ ($m \ge 4$). Then $\kappa_3(G) = \kappa_3(H) = 1$, $\kappa(G) = \kappa(H) = 1$ and hence $r_1 = r_2 = 1$. From Theorem 1.6, $\kappa_3(G \square H) \le \min\{\left\lfloor \frac{4}{3}\kappa_3(G) + r_1 - \frac{4}{3}\left\lceil \frac{r_1}{2} \right\rceil \right\rfloor|V(H)|, \left\lfloor \frac{4}{3}\kappa_3(H) + r_2 - \frac{4}{3}\left\lceil \frac{r_2}{2} \right\rceil \right\rfloor|V(G)|, \delta(G) + \delta(H)\} = \min\{m, n, 2\} = 2$. It can be checked that for any $S \subseteq V(G \square H)$ with $|S| = 3$, $\kappa_{G \square H}(S) \ge 2$, which implies $\kappa_3(G \square H) \ge 2$. Thus, $\kappa_3(G \square H) = 2$ and this is a sharp example for Theorem 1.6. + +**Remark 3.** In fact, we can improve the result of Proposition 2.7. From Lemma 3.1, we have $\kappa_3(T \circ H) \le \kappa(T \circ H) = |V(H)|$. Combining this with Proposition 2.7, we have $\kappa_3(T \circ H) = |V(H)|$. From Theorem 1.3, one may wonder whether $\kappa_3(T \square H) = \kappa_3(T) + \kappa_3(H) - 1$ for a connected graph $H$ and a tree $T$ (note that $\kappa_3(T) = \kappa(T) = 1$). For example, let $T = P_3$ and $H = K_4$. Then $\kappa_3(T) = \kappa(T) = 1$ and $\kappa_3(H) = 2$. One can check that $\kappa_3(T \square H) = 3 > 2 = \kappa_3(T) + \kappa_3(H) - 1$. So the equality does not hold for the Cartesian product of a tree and a connected graph. + +## Acknowledgements + +The authors are very grateful to the referees for their valuable comments and suggestions, which helped greatly to improve the presentation of this paper. + +## References + +[1] F. Bao, Y. Igarashi, S.R. Öhring, *Reliable broadcasting in product networks*, Discrete Appl. Math. 83(1998), 3-20. + +[2] L.W. Beineke, R.J. Wilson, *Topics in Structural Graph Theory*, Cambridge University Press, 2013. + +[3] A. Blasiak, R. Kleinberg, E.
Lubetzky, *Lexicographic products and the power of non-linear network coding*, arXiv: 1108.2489 [math.CO] 2013. + +[4] J.A. Bondy, U.S.R. Murty, *Graph Theory*, GTM 244, Springer, 2008. + +[5] G. Chartrand, S.F. Kapoor, L. Lesniak, D.R. Lick, *Generalized connectivity in graphs*, Bull. Bombay Math. Colloq. 2(1984), 1-6. + +[6] G. Chartrand, F. Okamoto, P. Zhang, *Rainbow trees in graphs and generalized connectivity*, Networks 55(4)(2010), 360-367. + +[7] X. Cheng, D. Du, *Steiner Trees in Industry*, Kluwer Academic Publisher, Dordrecht, 2001. + +[8] D.P. Day, O.R. Oellermann, H.C. Swart, *The l-connectivity function of trees and complete multipartite graphs*, J. Combin. Math. Combin. Comput. 10(1991), 183-192. \ No newline at end of file diff --git a/samples/texts/821833/page_15.md b/samples/texts/821833/page_15.md new file mode 100644 index 0000000000000000000000000000000000000000..fa1be434bc4984ab8e867e88600dad88ea7d9d10 --- /dev/null +++ b/samples/texts/821833/page_15.md @@ -0,0 +1,35 @@ +[9] K. Day, A.E. Al-Ayyoub, *The cross product of interconnection networks*, IEEE Trans. Parallel and Distributed Systems 8(2)(1997), 109-118. + +[10] D. Du, X. Hu, *Steiner Tree Problems in Computer Communication Networks*, World Scientific, 2008. + +[11] M. Feng, M. Xu, K. Wang, *Identifying codes of lexicographic product of graphs*, Electron. J. Combin. 19(4) (2012), 56-63. + +[12] P. Fragopoulou, S. G. Akl, *Edge-disjoint spanning trees on the star network with applications to fault tolerance*, IEEE Trans. Computers 45(2)(1996), 174-185. + +[13] M. Grötschel, *The Steiner tree packing problem in VLSI design*, Math. Program. 78(1997), 265-281. + +[14] M. Grötschel, A. Martin, R. Weismantel, *Packing Steiner trees: A cutting plane algorithm and computational results*, Math. Program. 72(1996), 125-145. + +[15] R. Gu, X. Li, Y. Shi, The generalized 3-connectivity of random graphs, Acta Math. Sinica 57(2) (2014), 321-330. (in Chinese). + +[16] M. 
Hager, *Pendant tree-connectivity*, J. Combin. Theory Ser. B 38(1985), 179-189. + +[17] M. Hager, *Path-connectivity in graphs*, Discrete Math. 59(1986), 53-59. + +[18] R. Hammack, W. Imrich, S. Klavžar, *Handbook of Product Graphs*, Second Edition, CRC Press, 2011. + +[19] H. R. Hind, O. R. Oellermann, *Menger-type results for three or more vertices*, Congressus Numerantium 113(1996), 179-204. + +[20] A. Itai, M. Rodeh, *The multi-tree approach to reliability in distributed networks*, Infor. & Comput. 79 (1988), 43-59. + +[21] S. Ku, B. Wang, T. Hung, *Constructing edge-disjoint spanning trees in product networks*, IEEE Trans. Parallel and Distributed Systems 14(3)(2003), 213-221. + +[22] F. Li, Z. Xu, H. Zhao, W. Wang, *On the number of spanning trees of the lexicographic product of networks*, Sci. China, Ser.F 42(2012), 949-959. + +[23] H. Li, X. Li, Y. Mao, *On extremal graphs with at most two internally disjoint Steiner trees connecting any three vertices*, Bull. Malays. Math. Sci. Soc., in press. + +[24] H. Li, X. Li, Y. Mao, Y. Sun, *Note on the generalized connectivity*, Ars Combin. 114(2014), 193-202. + +[25] H. Li, X. Li, Y. Sun, *The generalized 3-connectivity of Cartesian product graphs*, Discrete Math. Theor. Comput. Sci. 14(1)(2012), 43-54. + +[26] S. Li, W. Li, X. Li, *The generalized connectivity of complete bipartite graphs*, Ars Combin. 104(2012), 65-79. \ No newline at end of file diff --git a/samples/texts/821833/page_16.md new file mode 100644 index 0000000000000000000000000000000000000000..b9670265b946fa5ee9e62e20f3ef6202836d0501 --- /dev/null +++ b/samples/texts/821833/page_16.md @@ -0,0 +1,35 @@ +[27] S. Li, W. Li, X. Li, *The generalized connectivity of complete equipartition 3-partite graphs*, Bull. Malays. Math. Sci. Soc. (2)37(1)(2014), 103-121. + +[28] S. Li, X. Li, *Note on the hardness of generalized connectivity*, J. Combin. Optim. 24(2012), 389-396. + +[29] S. Li, X.
Li, W. Zhou, *Sharp bounds for the generalized connectivity $\kappa_3(G)$*, Discrete Math. 310(2010), 2147-2163. + +[30] X. Li, Y. Mao, *On extremal graphs with at most $\ell$ internally disjoint Steiner trees connecting any $n-1$ vertices*, accepted by Graphs & Combin. + +[31] X. Li, Y. Mao, Y. Sun, *On the generalized (edge-)connectivity of graphs*, Australasian J. Combin. 58(2)(2014), 304-319. + +[32] C.St.J.A. Nash-Williams, *Edge-disjoint spanning trees of finite graphs*, J. London Math. Soc. 36(1961), 445-450. + +[33] O.R. Oellermann, *Connectivity and edge-connectivity in graphs: A survey*, Congressus Numerantium 116(1996), 231-252. + +[34] O.R. Oellermann, *On the $\ell$-connectivity of a graph*. Graphs & Combin. 3(1987), 285-299. + +[35] O.R. Oellermann, *A note on the $\ell$-connectivity function of a graph*, Congressus Numerantium 60 (1987), 181-188. + +[36] F. Okamoto, P. Zhang, *The tree connectivity of regular complete bipartite graphs*, J. Combin. Math. Combin. Comput. 74(2010), 279-293. + +[37] K. Ozeki, T. Yamashita, *Spanning trees: A survey*, Graphs & Combin. 27(1)(2011), 1-26. + +[38] E. Palmer, *On the spanning tree packing number of a graph: A survey*, Discrete Math. 230(2001), 13-21. + +[39] G. Sabidussi, *Graphs with given group and given graph theoretical properties*, Canadian J. Math. 9(1957), 515-525. + +[40] N. A. Sherwani, *Algorithms for VLSI Physical Design Automation*, 3rd Edition, Kluwer Acad. Pub., London, 1999. + +[41] S. Špacapan, *Connectivity of Cartesian products of graphs*, Appl. Math. Lett. 21(2008), 682-685. + +[42] D. West, *Introduction to Graph Theory (Second Edition)*, Prentice Hall, 2001. + +[43] H. Whitney, *Congruent graphs and the connectivity of graphs*, Amer. J. Math. 54(1932), 150-168. + +[44] C. Yang, J. Xu, *Connectivity of lexicographic product and direct product of graphs*, Ars Combin. 111(2013), 3-12. 
\ No newline at end of file diff --git a/samples/texts/821833/page_17.md new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/samples/texts/821833/page_2.md new file mode 100644 index 0000000000000000000000000000000000000000..17c94369f8bd4929bf8fe07c9cd39b9ed6a599ec --- /dev/null +++ b/samples/texts/821833/page_2.md @@ -0,0 +1,25 @@ +# The generalized 3-connectivity of lexicographic product graphs* + +Xueliang Li¹† and Yaping Mao¹,²‡ + +¹Center for Combinatorics and LPMC-TJKLC, Nankai University, Tianjin 300071, China + +²Department of Mathematics, Qinghai Normal University, Xining, Qinghai 810008, China + +received 12th Aug. 2013, revised 26th Dec. 2013, 10th Apr. 2014, accepted 17th May 2014. + +The generalized *k*-connectivity $\kappa_k(G)$ of a graph $G$, first introduced by Hager, is a natural generalization of the concept of (vertex-)connectivity. Denote by $G \circ H$ and $G \square H$ the lexicographic product and Cartesian product of two graphs $G$ and $H$, respectively. In this paper, we prove that for any two connected graphs $G$ and $H$, $\kappa_3(G \circ H) \ge \kappa_3(G)|V(H)|$. We also give upper bounds for $\kappa_3(G \square H)$ and $\kappa_3(G \circ H)$. Moreover, all the bounds are sharp. + +**Keywords:** Connectivity, Steiner tree, Internally disjoint Steiner trees, Packing, Generalized connectivity, Lexicographic product. + +## 1 Introduction + +All graphs considered in this paper are undirected, finite and simple. We refer to the book [4] for graph theoretical notation and terminology not described here. For a graph $G$, let $V(G)$, $E(G)$ and $\delta(G)$ denote the set of vertices, the set of edges and the minimum degree of $G$, respectively. As usual, $|V(G)|$ is called the order of $G$.
For $S \subseteq V(G)$, we denote by $G \setminus S$ the subgraph obtained by deleting from $G$ the vertices of $S$ together with the edges incident with them. We divide our introduction into the following four subsections to state the motivations and results of this paper. + +### 1.1 Connectivity and its generalizations + +Connectivity is one of the most basic concepts in graph theory, in both the combinatorial and the algorithmic sense. The classical connectivity has two equivalent definitions. The connectivity of $G$, written $\kappa(G)$, is the minimum size of a vertex set $S \subseteq V(G)$ such that $G \setminus S$ is disconnected or has only one vertex. We call this definition the ‘cut’ version definition of connectivity. A well-known theorem of Whitney [43] + +*Supported by NSFC No. 11371205 and PCSIRT. +†Email: lxl@nankai.edu.cn +‡Email: maoyaping@ymail.com \ No newline at end of file diff --git a/samples/texts/821833/page_3.md new file mode 100644 index 0000000000000000000000000000000000000000..1dbbd907ca4c6ac9c238171ca56df173c0587712 --- /dev/null +++ b/samples/texts/821833/page_3.md @@ -0,0 +1,36 @@ +provides an equivalent definition of connectivity, which can be called the ‘path’ version definition of connectivity. For any two distinct vertices $x$ and $y$ in $G$, the local connectivity $\kappa_G(x, y)$ is the maximum number of internally disjoint paths connecting $x$ and $y$. Then $\kappa(G) = \min\{\kappa_G(x, y) | x, y \in V(G), x \neq y\}$ is defined to be the connectivity of $G$. + +Although there are many elegant and powerful results on connectivity in graph theory, the basic notion of classical connectivity may not be general enough to capture some computational settings. So people tried to generalize this concept. For the ‘cut’ version definition of connectivity, we find that the above minimum vertex set takes no account of the number of components of $G \setminus S$.
Two graphs with the same connectivity may have different degrees of vulnerability in the sense that the deletion of a vertex cut-set of minimum cardinality from one graph may produce a graph with considerably more components than in the case of the other graph. For example, the star $K_{1,n}$ and the path $P_{n+1}$ ($n \ge 3$) are both trees of order $n+1$ and therefore have connectivity 1, but the deletion of a cut-vertex from $K_{1,n}$ produces a graph with $n$ components while the deletion of a cut-vertex from $P_{n+1}$ produces only two components. Chartrand et al. [5] generalized the ‘cut’ version definition of connectivity. For an integer $k$ ($k \ge 2$) and a graph $G$ of order $n$ ($n \ge k$), the *k*-connectivity $\kappa'_k(G)$ is the smallest number of vertices whose removal from $G$ produces a graph with at least $k$ components or a graph with fewer than $k$ vertices. Thus, for $k=2$, $\kappa'_2(G) = \kappa(G)$. For more details about *k*-connectivity, we refer to [5, 8, 34, 35]. + +The generalized connectivity of a graph $G$, mentioned by Hager in [16], is a natural generalization of the ‘path’ version definition of connectivity. For a graph $G = (V, E)$ and a set $S \subseteq V(G)$ of at least two vertices, an *S-Steiner tree* or a *Steiner tree connecting S* (or simply, an *S-tree*) is a subgraph $T = (V', E')$ of $G$ that is a tree with $S \subseteq V'$. Note that when $|S| = 2$ a minimal Steiner tree connecting $S$ is just a path connecting the two vertices of $S$. Two Steiner trees $T$ and $T'$ connecting $S$ are said to be *internally disjoint* if $E(T) \cap E(T') = \emptyset$ and $V(T) \cap V(T') = S$.
For $S \subseteq V(G)$ and $|S| \ge 2$, the *generalized local connectivity* $\kappa_G(S)$ is the maximum number of internally disjoint trees connecting $S$ in $G$, that is, we search for the maximum cardinality of edge-disjoint trees which contain $S$ and are vertex-disjoint with the exception of the vertices in $S$. For an integer $k$ with $2 \le k \le n$, the *generalized $k$-connectivity* (or *k-tree-connectivity*) is defined as $\kappa_k(G) = \min\{\kappa_G(S) | S \subseteq V(G), |S| = k\}$, that is, $\kappa_k(G)$ is the minimum value of $\kappa_G(S)$ when $S$ runs over all $k$-subsets of $V(G)$. Clearly, when $|S| = 2$, $\kappa_2(G)$ is nothing but the connectivity $\kappa(G)$ of $G$, that is, $\kappa_2(G) = \kappa(G)$, which is the reason why one addresses $\kappa_k(G)$ as the generalized connectivity of $G$. By convention, for a connected graph $G$ with fewer than $k$ vertices, we set $\kappa_k(G) = 1$, and $\kappa_k(G) = 0$ when $G$ is disconnected. + +Note that the generalized *k*-connectivity and the *k*-connectivity of a graph are indeed different. Take, for example, the graph $G_0$ obtained from a triangle with vertex set $\{v_1, v_2, v_3\}$ by adding three new vertices $u_1, u_2, u_3$ and joining $v_i$ to $u_i$ by an edge for $1 \le i \le 3$. Then $\kappa_3(G_0) = 1$ but $\kappa'_3(G_0) = 2$. There are many results on the generalized *k*-connectivity; see [6, 15, 23, 24, 25, 26, 27, 28, 29, 30, 31, 36]. + +## 1.2 The application background of generalized connectivity + +One extreme of $\kappa_k(G)$ is when $k=2$. As we mentioned in the last subsection, $\kappa_2(G) = \kappa(G)$ is just the connectivity of a graph $G$. Another extreme of $\kappa_k(G)$ is when $k=n$. For $k=n$, one can see that $S=V(G)$ and $\kappa_n(G)$ is just the maximum number of edge-disjoint spanning trees in $G$ (for $k=n$, each Steiner tree connecting $S$ is a spanning tree of $G$). Then $\kappa_n(G)$ is called the *spanning-tree packing number* of $G$.
For the spanning-tree packing number, we refer to [37, 38]. For a given graph $G$, the problem of determining the spanning-tree packing number of $G$ is called the *spanning tree packing problem*. Note that the spanning tree packing problem is a special case of the generalized *k*-connectivity. \ No newline at end of file diff --git a/samples/texts/821833/page_4.md new file mode 100644 index 0000000000000000000000000000000000000000..c5c5e65f61717de21392e4a8aedda3cd56f37742 --- /dev/null +++ b/samples/texts/821833/page_4.md @@ -0,0 +1,22 @@ +Besides being of theoretical interest, the spanning tree packing problem has practical applications. One of them is to enhance the ability of fault tolerance in a network; see [12, 20]. Consider a source node *u* that wants to broadcast a message on a network with *ℓ* edge-disjoint spanning trees. The node *u* sends one copy of the message along each of the *ℓ* spanning trees. If there are no more than *ℓ* − 1 faulty edges, all the other nodes can receive the message. In fact, the generalized *k*-connectivity $\kappa_k(G)$, which is a generalization of the spanning-tree packing number $\kappa_n(G)$, can be seen as another parameter to express the ability of fault tolerance of a network, where any two Steiner trees connecting the node set *S* in *G* are required to share no Steiner vertex (each vertex in $V(G) \setminus S$ is called a *Steiner vertex*). + +In addition to being a natural combinatorial measure, the generalized *k*-connectivity can be motivated by its interesting interpretation in practice. For example, suppose that *G* represents a network. If one wants to connect a pair of vertices of *G*, then a path is used to connect them. However, if one wants to connect a set *S* of vertices of *G* with $|S| \ge 3$, then a tree has to be used to connect them.
This kind of tree for connecting a set of vertices is usually called a *Steiner tree*, and is popularly used in the physical design of VLSI circuits (see [13, 14, 40]). In this application, a Steiner tree is needed for sharing an electric signal among a set of terminal nodes. Steiner trees are also used in computer communication networks (see [10]) and optical wireless communication networks (see [7]). Usually, one wants to consider how tough a network can be for the connection of a set of vertices. Then, the number of totally independent ways to connect them is a measure for this purpose. The generalized *k*-connectivity can serve as a measure of the capability of a network *G* to connect any *k* vertices in *G*. + +## 1.3 Graph products and the parameters $\kappa$ and $\kappa_n$ + +The Cartesian product of two graphs $G$ and $H$, written as $G \square H$, is the graph with vertex set $V(G) \times V(H)$, in which two vertices $(u, v)$ and $(u', v')$ are adjacent if and only if $u = u'$ and $(v, v') \in E(H)$, or $v = v'$ and $(u, u') \in E(G)$. Clearly, the Cartesian product is commutative, that is, $G \square H$ is isomorphic to $H \square G$. The lexicographic product of two graphs $G$ and $H$, written as $G \circ H$, is defined as follows: $V(G \circ H) = V(G) \times V(H)$, and two distinct vertices $(u, v)$ and $(u', v')$ of $G \circ H$ are adjacent if and only if either $(u, u') \in E(G)$ or $u = u'$ and $(v, v') \in E(H)$. Note that unlike the Cartesian product, the lexicographic product is a non-commutative product since $G \circ H$ is usually not isomorphic to $H \circ G$. + +Product networks were proposed based upon the idea of using the cross product as a tool for “combining” two known graphs with established properties to obtain a new one that inherits properties from both [9]. There has been an increasing interest in a class of interconnection networks called Cartesian product networks; see [1, 9, 21]. In [21], Ku et al.
studied the problem of constructing the maximum number of edge-disjoint spanning trees in Cartesian product networks, and gave a sharp lower bound of $\kappa_n(G \square H)$. A natural question is to study sharp upper and lower bounds of $\kappa_k(G \square H)$. For $k = 2$, Sabidussi [39] derived the following theorem on the connectivity of Cartesian product graphs. + +**Theorem 1.1** [39] Let $G$ and $H$ be two connected graphs. Then $\kappa(G \square H) \geq \kappa(G) + \kappa(H)$. + +The lexicographic product has also been studied extensively; see [18]. Recently, some network applications of the lexicographic product were studied; see [3, 11, 22]. Here we will study the sharp upper and lower bounds of $\kappa_k(G \circ H)$. For $k = 2$, Yang and Xu [44] investigated the classical connectivity of the lexicographic product of two graphs. + +**Theorem 1.2** [44] Let *G* and *H* be two graphs. If *G* is non-trivial, non-complete and connected, then $\kappa(G \circ H) = \kappa(G)|V(H)|$. \ No newline at end of file diff --git a/samples/texts/821833/page_5.md new file mode 100644 index 0000000000000000000000000000000000000000..5a848695399a6b3517a2312b68a96a66d847ec7e --- /dev/null +++ b/samples/texts/821833/page_5.md @@ -0,0 +1,31 @@ +## 1.4 Graph products and the parameter $κ_3$ + +In [25], Li et al. studied the generalized 3-connectivity of Cartesian product graphs and obtained a lower bound for it. Their result could be seen as an extension of Sabidussi's theorem. + +**Theorem 1.3** [25] Let $G$ and $H$ be connected graphs such that $κ_3(G) ≥ κ_3(H)$. The following assertions hold: + +(i) If $κ(G) = κ_3(G)$, then $κ_3(G□H) ≥ κ_3(G) + κ_3(H) - 1$. Moreover, the bound is sharp; + +(ii) If $κ(G) > κ_3(G)$, then $κ_3(G□H) ≥ κ_3(G) + κ_3(H)$. Moreover, the bound is sharp. + +In Section 2, we obtain the following lower bound of $κ_3(G ○ H)$, which is the main result of this paper. This result could be seen as an extension of Theorem 1.2.
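Theorem 1.2 can be checked directly on a small instance. The sketch below is ours, not the authors' (adjacency-dict graphs, brute-force connectivity over all deletion sets); it builds $G \circ H$ for $G = P_4$ and $H = P_2$ and confirms $\kappa(G \circ H) = \kappa(G)|V(H)| = 2$.

```python
from itertools import combinations, product

def lex_product(G, H):
    # adjacency of G o H: (u, v) ~ (u2, v2) iff uu2 in E(G), or u = u2 and vv2 in E(H)
    V = list(product(G, H))
    return {(u, v): [(u2, v2) for (u2, v2) in V
                     if u2 in G[u] or (u == u2 and v2 in H[v])]
            for (u, v) in V}

def is_connected(adj, removed=frozenset()):
    left = [v for v in adj if v not in removed]
    if len(left) <= 1:
        return True
    seen, stack = set(), [left[0]]
    while stack:
        v = stack.pop()
        if v in seen or v in removed:
            continue
        seen.add(v)
        stack.extend(adj[v])
    return len(seen) == len(left)

def connectivity(adj):
    # 'cut' version: size of a smallest set whose deletion disconnects
    # the graph or leaves at most one vertex
    V = list(adj)
    for k in range(len(V)):
        for S in combinations(V, k):
            if len(V) - k <= 1 or not is_connected(adj, frozenset(S)):
                return k
    return len(V) - 1

P4 = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 3] for i in range(4)}
P2 = {'a': ['b'], 'b': ['a']}
GH = lex_product(P4, P2)
assert connectivity(GH) == connectivity(P4) * len(P2)  # 2 == 1 * 2
```

The brute force is exponential, so this is only a spot check on tiny graphs, but it exercises the definition of the product exactly as stated above.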
+ +**Theorem 1.4** Let $G$ and $H$ be two connected graphs. Then + +$$κ_3(G \circ H) ≥ κ_3(G)|V(H)|.$$ + +Moreover, the bound is sharp. + +In Section 3, we derive the upper bounds of $κ_3(G \circ H)$ and $κ_3(G \square H)$. + +**Theorem 1.5** Let $G$ and $H$ be two connected graphs. If $G$ is non-trivial and non-complete, then $κ_3(G \circ H) ≤ ⌊(4/3)κ_3(G) + r - (4/3)⌈r/2⌉⌋|V(H)|$, where $r ≡ κ(G) (mod 4)$. Moreover, the bound is sharp. + +**Theorem 1.6** Let $G$ and $H$ be two connected graphs. Then $κ_3(G \square H) ≤ \min\{⌊(4/3)κ_3(G) + r_1 - (4/3)⌈r_1/2⌉⌋|V(H)|, ⌊(4/3)κ_3(H) + r_2 - (4/3)⌈r_2/2⌉⌋|V(G)|, δ(G)+δ(H)\}$, where $r_1 ≡ κ(G) (mod 4)$ and $r_2 ≡ κ(H) (mod 4)$. Moreover, the bound is sharp. + +# 2 Lower bound of $κ_3(G \circ H)$ + +In this section, let $G$ and $H$ be two connected graphs with $V(G) = \{u_1, u_2, \dots, u_n\}$ and $V(H) = \{v_1, v_2, \dots, v_m\}$, respectively. Then $V(G \circ H) = \{(u_i, v_j) | 1 ≤ i ≤ n, 1 ≤ j ≤ m\}$. For $v ∈ V(H)$, we use $G(v)$ to denote the subgraph of $G \circ H$ induced by the vertex set $\{(u_i, v) | 1 ≤ i ≤ n\}$. Similarly, for $u ∈ V(G)$, we use $H(u)$ to denote the subgraph of $G \circ H$ induced by the vertex set $\{(u, v_j) | 1 ≤ j ≤ m\}$. In the sequel, let $K_{s,t}$, $K_n$ and $P_n$ denote the complete bipartite graph of order $s+t$ with part sizes $s$ and $t$, the complete graph of order $n$, and the path of order $n$, respectively. If $G$ is a connected graph and $x, y ∈ V(G)$, then the distance $d_G(x, y)$ between $x$ and $y$ is the length of a shortest path connecting $x$ and $y$ in $G$. The degree of a vertex $v$ in $G$ is denoted by $d_G(v)$. + +We now introduce the general idea of the proof of Theorem 1.4, with a running example (corresponding to Fig. 1). From the definition, the lexicographic product graph $G \circ H$ is a graph obtained by replacing each vertex of $G$ by a copy of $H$ and replacing each edge of $G$ by a complete bipartite graph $K_{m,m}$. Recall that $V(G) = \{u_1, u_2, \dots, u_n\}$.
Clearly, $V(G \circ H) = ⋃_{i=1}^n V(H(u_i))$. For example, let $G = K_4$ (see Fig. 1 (a)). Set $V(K_4) = \{u_1, u_2, u_3, u_4\}$ and $|V(H)| = m$. Then $K_4 \circ H$ is a graph obtained by replacing each vertex of $K_4$ by a copy of $H$ and replacing each edge of $K_4$ by a complete bipartite graph $K_{m,m}$ (see Fig. 1 (e)). Clearly, $V(K_4 \circ H) = ⋃_{i=1}^4 V(H(u_i))$ (see Fig. 1 (e)). + +In this section, we give the proof of Theorem 1.4. For two connected graphs $G$ and $H$, we prove that $κ_3(G \circ H) ≥ κ_3(G)|V(H)|$. Set $κ_3(G) = l$ and $|V(H)| = m$. From the definition of $κ_3(G \circ H)$, it suffices to show that $κ_{G \circ H}(S) ≥ lm$ for any $S ⊆ V(G \circ H)$ and $|S| = 3$. From the definition of
For the above example, we have $\kappa_3(G) = \kappa_3(K_4) = \ell = 2$. It suffices to prove that $\kappa_3(G \circ H) \ge \kappa_3(K_4)|V(H)| = 2m$. Then there are $\ell = 2$ internally disjoint Steiner trees connecting $\{u_1, u_2, u_3\}$, say $T_1, T_2$ (see Fig. 1 (b), (c)). Note that $T_1 \cup T_2$ is a subgraph of $G$ (see Fig. 1 (a), (d)). Then $(T_1 \cup T_2) \circ H$ is a subgraph of $G \circ H$ (see Fig. 1 (e), (h)). + +If we can prove that $\kappa_{(\bigcup_{i=1}^\ell T_i) \circ H}(S) \ge \ell m$ for $S = \{x, y, z\}$, then $\kappa_{G \circ H}(S) \ge \kappa_{(\bigcup_{i=1}^\ell T_i) \circ H}(S) \ge \ell m$ since $(\bigcup_{i=1}^\ell T_i) \circ H$ is a subgraph of $G \circ H$. Therefore, the problem is converted into finding $\ell m$ internally disjoint Steiner trees connecting $S$ in $(\bigcup_{i=1}^\ell T_i) \circ H$. Observe that $(\bigcup_{i=1}^\ell T_i) \circ H = \bigcup_{i=1}^\ell (T_i \circ H)$. The structure of $\bigcup_{i=1}^\ell (T_i \circ H)$ is shown in Fig. 2. In order to show this structure clearly, we take $\ell$ copies of $H(u_j)$, and $\ell$ copies of $H(u_k)$. Note that these $\ell$ copies of $H(u_j)$ (resp. $H(u_k)$) represent the same graph. For the above example, if we can prove that $\kappa_{(T_1 \cup T_2) \circ H}(S) \ge 2m$ for $S = \{x, y, z\}$, then $\kappa_{G \circ H}(S) \ge \kappa_{(T_1 \cup T_2) \circ H}(S) \ge 2m$, as desired. Note that $(T_1 \cup T_2) \circ H = (T_1 \circ H) \cup (T_2 \circ H)$. The problem is converted into finding $2m$ internally disjoint Steiner trees connecting $S$ in $(T_1 \circ H) \cup (T_2 \circ H)$ (see Fig. 1 (h)).
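For the running example, the two trees of Fig. 1 (b), (c) can be taken to be a path through $S$ and a star at $u_4$ (this concrete edge choice is ours, since the figure is not reproduced here); a few lines confirm they are internally disjoint, so $\kappa_3(K_4) \ge 2$.

```python
def verts(tree):
    # vertex set of a tree given as a set of frozenset edges
    return {v for e in tree for v in e}

def internally_disjoint(t1, t2, S):
    # Steiner trees connecting S are internally disjoint iff they share
    # no edge and no vertex outside S
    return not (t1 & t2) and verts(t1) & verts(t2) == S

S = {'u1', 'u2', 'u3'}
T1 = {frozenset({'u1', 'u2'}), frozenset({'u2', 'u3'})}   # path u1-u2-u3
T2 = {frozenset({'u1', 'u4'}), frozenset({'u2', 'u4'}),
      frozenset({'u3', 'u4'})}                            # star at u4
assert internally_disjoint(T1, T2, S)                     # hence kappa_3(K4) >= 2
```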
\ No newline at end of file diff --git a/samples/texts/821833/page_7.md new file mode 100644 index 0000000000000000000000000000000000000000..d77841712c209b70c9b57c4fff9326c93bdaeed0 --- /dev/null +++ b/samples/texts/821833/page_7.md @@ -0,0 +1,17 @@ +For each $T_i$ ($1 \le i \le \ell$), if we can find $m$ internally disjoint Steiner trees connecting $S$ in $T_i \circ H$, say $T_{i,1}, T_{i,2}, \dots, T_{i,m}$, then the total number of internally disjoint Steiner trees connecting $S$ in $\bigcup_{i=1}^\ell (T_i \circ H) = (\bigcup_{i=1}^\ell T_i) \circ H$ is $\ell m$, which implies that $\kappa_{G \circ H}(S) \ge \kappa_{(\bigcup_{i=1}^\ell T_i) \circ H}(S) \ge \ell m$ (Note that we must guarantee that any two trees of $\{T_{i,j} \mid 1 \le i \le \ell, 1 \le j \le m\}$ are internally disjoint). Furthermore, from the arbitrariness of $S$, we can get $\kappa_3(G \circ H) \ge \ell m = \kappa_3(G)|V(H)|$ and complete the proof of Theorem 1.4. For the above example, we need to find $m$ internally disjoint Steiner trees connecting $S$ in both $T_1 \circ H$ and $T_2 \circ H$ (see Fig. 1 (f), (g)). Then the total number of internally disjoint Steiner trees connecting $S$ in $(T_1 \cup T_2) \circ H = (T_1 \circ H) \cup (T_2 \circ H)$ is $2m$, which implies that $\kappa_{G \circ H}(S) \ge \kappa_{(T_1 \cup T_2) \circ H}(S) \ge 2m$. Thus the result follows by the arbitrariness of $S$. + +**Fig. 2:** The structure of $(\bigcup_{i=1}^\ell T_i) \circ H = \bigcup_{i=1}^\ell (T_i \circ H)$. + +From the above analysis, we need to consider the graph $T \circ H$ and prove that for any $S = \{x, y, z\} \subseteq V(T \circ H)$ there are $m$ internally disjoint Steiner trees connecting $S$ in $T \circ H$, where $T$ is a Steiner tree connecting $\{u_i, u_j, u_k\}$ in $G$. If so, then $\kappa_{T \circ H}(S) \ge m$ for any $S = \{x, y, z\} \subseteq V(T \circ H)$, which implies that $\kappa_3(T \circ H) \ge m = |V(H)|$.
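This target can be verified exhaustively on a tiny case. The sketch below (helper names are ours) builds $P_3 \circ P_2$ and, for every 3-set $S$, searches all small edge subsets for a pair of internally disjoint Steiner trees, confirming $\kappa_{T \circ H}(S) \ge 2 = |V(H)|$ for $T = P_3$, $H = P_2$:

```python
from itertools import combinations

def lex_product(G, H):
    # G o H on pairs (u, v); edges represented as frozensets of endpoints
    V = [(u, v) for u in G for v in H]
    E = {frozenset({a, b}) for a in V for b in V
         if a != b and (b[0] in G[a[0]] or (a[0] == b[0] and b[1] in H[a[1]]))}
    return V, E

def is_steiner_tree(edges, S):
    # connected, acyclic (|E| = |V| - 1), and spanning a superset of S
    vs = {v for e in edges for v in e}
    if not S <= vs or len(edges) != len(vs) - 1:
        return False
    seen, stack = set(), [next(iter(vs))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(w for e in edges if v in e for w in e)
    return seen == vs

def internally_disjoint(t1, t2, S):
    v1 = {v for e in t1 for v in e}
    v2 = {v for e in t2 for v in e}
    return not (t1 & t2) and v1 & v2 == S

P3 = {0: [1], 1: [0, 2], 2: [1]}
P2 = {'a': ['b'], 'b': ['a']}
V, E = lex_product(P3, P2)
for S in map(set, combinations(V, 3)):
    trees = [set(t) for k in range(2, len(V))
             for t in combinations(E, k) if is_steiner_tree(set(t), S)]
    assert any(internally_disjoint(t1, t2, S)
               for t1, t2 in combinations(trees, 2))
```

The enumeration is feasible only because the instance is tiny (6 vertices, 11 edges); the point is that no choice of $S$ is a counterexample.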
+ +On the basis of this idea, we study the generalized 3-connectivity of the lexicographic product of a tree $T$ and a connected graph $H$ first, and show that $\kappa_3(T \circ H) \ge |V(H)|$ in Subsection 2.2. After this preparation, we consider the graph $G \circ H$ where $G$ is a general (connected) graph and prove $\kappa_3(G \circ H) \ge \kappa_3(G)|V(H)|$ in Subsection 2.3. In Subsection 2.1, we investigate the generalized 3-connectivity of the lexicographic product of a path $P_n$ and a connected graph $H$ since a path is a special tree. So the proof of Theorem 1.4 can be divided into the above-mentioned three subsections. Each subsection is a preparation for the subsequent one. + +Before realizing the above three steps, we introduce the following two well-known lemmas, which will be used later. + +Given a vertex $x$ and a set $U$ of vertices, an $(x, U)$-fan is a set of paths from $x$ to $U$ such that any two of them share only the vertex $x$. The size of an $(x, U)$-fan is the number of internally disjoint paths from $x$ to $U$. + +**Lemma 2.1** (Fan Lemma, [42], p-170) A graph is $k$-connected if and only if it has at least $k+1$ vertices and, for every choice of $x, U$ with $|U| \ge k$, it has an $(x, U)$-fan of size $k$. + +**Lemma 2.2** (Expansion Lemma, [42], p-162) If $G$ is a $k$-connected graph, and $G'$ is obtained from $G$ by adding a new vertex $y$ with at least $k$ neighbors in $G$, then $G'$ is $k$-connected. + +Let $G$ be a $k$-connected graph. Choose $U \subseteq V(G)$ with $|U| = k$. Then let the graph $G'$ be obtained from $G$ by adding a new vertex $y$ and joining $y$ to each vertex of $U$.
We call this operation an \ No newline at end of file diff --git a/samples/texts/821833/page_8.md b/samples/texts/821833/page_8.md new file mode 100644 index 0000000000000000000000000000000000000000..aadc426aac9cf4a094ddf2923b73825cbe71ba32 --- /dev/null +++ b/samples/texts/821833/page_8.md @@ -0,0 +1,23 @@ +expansion operation at $y$ and $U$. Denote the resulting graph $G'$ by $G' = G \lor \{y, U\}$. + +## 2.1 The lexicographic product of a path and a connected graph + +To start with, we show the following proposition, which is a preparation for the next subsection. + +**Proposition 2.3** Let $H$ be a connected graph and $P_n$ be a path with $n$ vertices. Then $\kappa_3(P_n \circ H) \ge |V(H)|$. Moreover, the bound is sharp. + +Set $V(H) = \{v_1, v_2, \dots, v_m\}$ and $V(P_n) = \{u_1, u_2, \dots, u_n\}$. Let $u_1, u_2, \dots, u_n$ be the linear order of the vertices on the path $P_n$. It suffices to show that $\kappa_{P_n \circ H}(S) \ge m$ for any $S = \{x, y, z\} \subseteq V(P_n \circ H)$, that is, that there exist $m$ internally disjoint Steiner trees connecting $S$ in $P_n \circ H$. We prove this by the following three lemmas. + +**Lemma 2.4** If $x, y, z$ belong to the same $V(H(u_i))$ ($1 \le i \le n$), then there exist $m$ internally disjoint Steiner trees connecting $S$. + +*Proof:* Without loss of generality, we assume $x, y, z \in V(H(u_1))$. Then the trees $T_j = x(u_2, v_j) \cup y(u_2, v_j) \cup z(u_2, v_j)$ ($1 \le j \le m$) are $m$ internally disjoint Steiner trees connecting $S$, as desired. $\square$ + +**Lemma 2.5** If exactly two vertices of $\{x, y, z\}$ belong to some copy $H(u_i)$ ($1 \le i \le n$), then there exist $m$ internally disjoint Steiner trees connecting $S$. + +*Proof:* We may assume $x, y \in V(H(u_1))$ and $z \in V(H(u_i))$ ($2 \le i \le n-1$); the following argument shows that this assumption does not affect the generality of the proof. Consider the case $i \ge 3$. Let $P' = u_2u_3\cdots u_n$. 
Clearly, $\kappa(P' \circ H) \ge m$. From Lemma 2.1, there is a $(z, U)$-fan in $P' \circ H$, where $U = V(H(u_2)) = \{(u_2, v_j) \mid 1 \le j \le m\}$. Thus there exist $m$ internally disjoint paths $P_1, P_2, \dots, P_m$ such that $P_j$ ($1 \le j \le m$) is a path connecting $z$ and $(u_2, v_j)$. Furthermore, the trees $T_j = x(u_2, v_j) \cup y(u_2, v_j) \cup P_j$ ($1 \le j \le m$) are $m$ internally disjoint Steiner trees connecting $S$. + +Now assume $i = 2$, so $x, y \in V(H(u_1))$ and $z \in V(H(u_2))$. Let $x', y'$ be the vertices corresponding to $x, y$ in $H(u_2)$, and let $z'$ be the vertex corresponding to $z$ in $H(u_1)$. Clearly, $H(u_1)$ is connected, and so there is a path $P_1$ connecting $x$ and $y$ in $H(u_1)$. + +Fig. 3: Graphs for Lemma 2.5. + +If $z' \notin \{x, y\}$, without loss of generality, let $\{x, y, z'\} = \{(u_1, v_j) \mid 1 \le j \le 3\}$ and $\{x', y', z\} = \{(u_2, v_j) \mid 1 \le j \le 3\}$; then the trees $T_j = x(u_2, v_j) \cup y(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup z(u_1, v_j)$ ($4 \le j \le m$) \ No newline at end of file diff --git a/samples/texts/821833/page_9.md b/samples/texts/821833/page_9.md new file mode 100644 index 0000000000000000000000000000000000000000..ecf5b5dc30974ad6eebd92146b60cfb68df760ce --- /dev/null +++ b/samples/texts/821833/page_9.md @@ -0,0 +1,19 @@ +and $T_1 = xx' \cup x'y \cup yz$ and $T_2 = xz \cup P_1$ and $T_3 = xy' \cup y'z' \cup yy' \cup zz'$ are $m$ internally disjoint Steiner trees connecting $S$; see Fig. 3 (a). + +If $z' \in \{x, y\}$, without loss of generality, let $z' = y$, $\{x, y\} = \{(u_1, v_1), (u_1, v_2)\}$ and $\{x', z\} = \{(u_2, v_1), (u_2, v_2)\}$; then the trees $T_j = x(u_2, v_j) \cup y(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup z(u_1, v_j)$ ($3 \le j \le m$) and $T_1 = xz \cup xx' \cup yx'$ and $T_2 = yz \cup P_1$ are $m$ internally disjoint Steiner trees connecting $S$; see Fig. 3 (b). The proof is complete. 
$\square$ + +**Lemma 2.6** If $x, y, z$ are contained in distinct copies $H(u_i)$, then there exist $m$ internally disjoint Steiner trees connecting $S$. + +*Proof:* We have the following cases to consider. + +**Case 1.** $d_{P_n \circ H}(x, y) = d_{P_n \circ H}(y, z) = 1$. + +We may assume that $x \in V(H(u_1))$, $y \in V(H(u_2))$ and $z \in V(H(u_3))$; the following argument shows that this assumption does not affect the generality of the proof. Let $y', z'$ be the vertices corresponding to $y, z$ in $H(u_1)$, let $x', z''$ be the vertices corresponding to $x, z$ in $H(u_2)$, and let $x'', y''$ be the vertices corresponding to $x, y$ in $H(u_3)$. + +If $x, y', z'$ are distinct vertices in $H(u_1)$, without loss of generality, let $\{x, y', z'\} = \{(u_1, v_j) \mid 1 \le j \le 3\}$, $\{x', y, z''\} = \{(u_2, v_j) \mid 1 \le j \le 3\}$ and $\{x'', y'', z\} = \{(u_3, v_j) \mid 1 \le j \le 3\}$; then the trees $T_j = x(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup y(u_1, v_j) \cup z(u_2, v_j)$ ($4 \le j \le m$) and $T_1 = xx' \cup x'z' \cup x'z \cup z'y$ and $T_2 = xz'' \cup zz'' \cup yz'' \cup yy'$ and $T_3 = xy \cup yz$ are $m$ internally disjoint Steiner trees connecting $S$; see Fig. 4 (a). + +Fig. 4: Graphs for Case 1 of Lemma 2.6. + +Suppose that two of $x, y', z'$ are the same vertex in $H(u_1)$. If $y' = z'$, without loss of generality, let $\{x, y'\} = \{(u_1, v_1), (u_1, v_2)\}$, $\{x', y\} = \{(u_2, v_1), (u_2, v_2)\}$ and $\{x'', z\} = \{(u_3, v_1), (u_3, v_2)\}$; then the trees $T_j = x(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup y(u_1, v_j) \cup z(u_2, v_j)$ ($3 \le j \le m$) and $T_1 = xy \cup yz$ and $T_2 = xx' \cup x'x'' \cup yx'' \cup x'z$ are $m$ internally disjoint Steiner trees connecting $S$; see Fig. 4 (b). The other cases ($x = y'$ or $x = z'$) can be proved by similar arguments. + +Suppose that $x, y', z'$ are all the same vertex in $H(u_1)$. Without loss of generality, let $x = (u_1, v_1)$, $y = (u_2, v_1)$ and $z = (u_3, v_1)$. 
Then the trees $T_j = x(u_2, v_j) \cup (u_1, v_j)(u_2, v_j) \cup y(u_1, v_j) \cup z(u_2, v_j)$ ($2 \le j \le m$) and $T_1 = xy \cup yz$ are $m$ internally disjoint Steiner trees connecting $S$. \ No newline at end of file
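Constructions like this last one can be sanity-checked mechanically: every edge used between consecutive copies $H(u_i)$ and $H(u_{i+1})$ exists in $P_n \circ H$ regardless of $E(H)$, so it only remains to verify that each tree connects $S$ and that the trees' internal vertex sets are pairwise disjoint. A small sketch in Python (the choice $m = 5$ and the helper `connects` are our own, not from the paper):

```python
from itertools import combinations

def connects(edges, terminals):
    """Check that the given edge set connects all terminals (DFS over the edges)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    start = next(iter(terminals))
    seen, stack = {start}, [start]
    while stack:
        for w in adj.get(stack.pop(), ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return terminals <= seen

m = 5  # any m = |V(H)| >= 2 works here; vertices are pairs (u_i, j)
x, y, z = ("u1", 1), ("u2", 1), ("u3", 1)
S = {x, y, z}

trees = [[(x, y), (y, z)]]  # T_1 = xy ∪ yz (cross-copy edges, always present)
for j in range(2, m + 1):   # T_j = x(u2,vj) ∪ (u1,vj)(u2,vj) ∪ y(u1,vj) ∪ z(u2,vj)
    trees.append([(x, ("u2", j)), (("u1", j), ("u2", j)),
                  (y, ("u1", j)), (z, ("u2", j))])

internal = []
for t in trees:
    verts = {v for e in t for v in e}
    assert connects(t, S)      # each tree is a Steiner tree for S
    internal.append(verts - S) # internal (non-terminal) vertices

# internally disjoint: internal vertex sets are pairwise disjoint
for a, b in combinations(internal, 2):
    assert not (a & b)
```

Each $T_j$ ($j \ge 2$) has exactly two internal vertices, $(u_1, v_j)$ and $(u_2, v_j)$, distinct across different $j$, while $T_1$ has none, which is why the disjointness check succeeds.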