Theorem 14.1.4 The \( p \) norms do indeed satisfy the axioms of a norm.
Proof: It is obvious that \( \parallel \cdot \parallel_{p} \) satisfies most of the norm axioms. The only one that is not clear is the triangle inequality. To save notation, write \( \parallel \cdot \parallel \) in place of \( \parallel \cdot \parallel_{p} \) in what follows. Note also that \( \frac{p}{p'} = p - 1 \). If \( \parallel \mathbf{x} + \mathbf{y}\parallel = 0 \), the triangle inequality is immediate, so assume this norm is nonzero. Then using the Hölder inequality,

\[ \parallel \mathbf{x} + \mathbf{y}\parallel^{p} = \sum_{i=1}^{n} \left| x_i + y_i \right|^{p} \]

\[ \leq \sum_{i=1}^{n} \left| x_i + y_i \right|^{p-1}\left| x_i \right| + \sum_{i=1}^{n} \left| x_i + y_i \right|^{p-1}\left| y_i \right| \]

\[ = \sum_{i=1}^{n} \left| x_i + y_i \right|^{\frac{p}{p'}}\left| x_i \right| + \sum_{i=1}^{n} \left| x_i + y_i \right|^{\frac{p}{p'}}\left| y_i \right| \]

\[ \leq \left( \sum_{i=1}^{n} \left| x_i + y_i \right|^{p} \right)^{1/p'} \left[ \left( \sum_{i=1}^{n} \left| x_i \right|^{p} \right)^{1/p} + \left( \sum_{i=1}^{n} \left| y_i \right|^{p} \right)^{1/p} \right] \]

\[ = \parallel \mathbf{x} + \mathbf{y}\parallel^{p/p'} \left( \parallel \mathbf{x}\parallel_{p} + \parallel \mathbf{y}\parallel_{p} \right) \]

so dividing by \( \parallel \mathbf{x} + \mathbf{y}\parallel^{p/p'} \), it follows

\[ \parallel \mathbf{x} + \mathbf{y}\parallel^{p} \parallel \mathbf{x} + \mathbf{y}\parallel^{-p/p'} = \parallel \mathbf{x} + \mathbf{y}\parallel \leq \parallel \mathbf{x}\parallel_{p} + \parallel \mathbf{y}\parallel_{p} \]

\( \left( p - \frac{p}{p'} = p\left( 1 - \frac{1}{p'} \right) = p \frac{1}{p} = 1 \right) \). \( \blacksquare \)
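The triangle inequality just proved is Minkowski's inequality, and it is easy to spot-check numerically. The following sketch is my own illustration (not part of the text); `p_norm` is a helper name chosen here:

```python
import numpy as np

def p_norm(x, p):
    """The l^p norm: (sum_i |x_i|^p)^(1/p)."""
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

# Check ||x + y||_p <= ||x||_p + ||y||_p on random vectors for several p >= 1.
rng = np.random.default_rng(0)
for p in (1.0, 1.5, 2.0, 3.0, 10.0):
    for _ in range(100):
        x = rng.standard_normal(5)
        y = rng.standard_normal(5)
        assert p_norm(x + y, p) <= p_norm(x, p) + p_norm(y, p) + 1e-12
```

For \( p < 1 \) the check would fail on some inputs, which is exactly why the theorem requires \( p \geq 1 \).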
Theorem 14.1.5 The following holds, where \( q \) denotes the conjugate exponent of \( p \) (so \( 1/p + 1/q = 1 \)) and \( \parallel A\parallel_{p} \) is the operator norm induced by \( \parallel \cdot \parallel_{p} \).

\[ \parallel A\parallel_{p} \leq \left( \sum_{k} \left( \sum_{j} \left| A_{jk} \right|^{p} \right)^{q/p} \right)^{1/q} \]
Proof: Let \( \parallel \mathbf{x}\parallel_{p} \leq 1 \) and let \( A = \left( \mathbf{a}_1, \cdots, \mathbf{a}_n \right) \) where the \( \mathbf{a}_k \) are the columns of \( A \). Then

\[ A\mathbf{x} = \sum_{k} x_k \mathbf{a}_k \]

and so by Hölder's inequality,

\[ \parallel A\mathbf{x}\parallel_{p} = \begin{Vmatrix} \sum_{k} x_k \mathbf{a}_k \end{Vmatrix}_{p} \leq \sum_{k} \left| x_k \right| \begin{Vmatrix} \mathbf{a}_k \end{Vmatrix}_{p} \]

\[ \leq \left( \sum_{k} \left| x_k \right|^{p} \right)^{1/p} \left( \sum_{k} \begin{Vmatrix} \mathbf{a}_k \end{Vmatrix}_{p}^{q} \right)^{1/q} \]

\[ \leq \left( \sum_{k} \left( \sum_{j} \left| A_{jk} \right|^{p} \right)^{q/p} \right)^{1/q} \blacksquare \]
Lemma 14.2.1 Let \( A, B \in \mathcal{L}\left( X, X \right) \) where \( X \) is a normed vector space as above. Then for \( \parallel \cdot \parallel \) denoting the operator norm,

\[ \parallel AB\parallel \leq \parallel A\parallel \parallel B\parallel \]
Proof: This follows from the definition. Letting \( \parallel x\parallel \leq 1 \), it follows from Theorem 14.0.10 that

\[ \parallel ABx\parallel \leq \parallel A\parallel \parallel Bx\parallel \leq \parallel A\parallel \parallel B\parallel \parallel x\parallel \leq \parallel A\parallel \parallel B\parallel \]

and so

\[ \parallel AB\parallel \equiv \sup_{\parallel x\parallel \leq 1} \parallel ABx\parallel \leq \parallel A\parallel \parallel B\parallel \quad \blacksquare \]
Lemma 14.2.2 Let \( A, B \in \mathcal{L}\left( X, X \right) \), \( A^{-1} \in \mathcal{L}\left( X, X \right) \), and suppose \( \parallel B\parallel < 1/\begin{Vmatrix} A^{-1} \end{Vmatrix} \).

Then \( \left( A + B \right)^{-1} \) exists and

\[ \begin{Vmatrix} \left( A + B \right)^{-1} \end{Vmatrix} \leq \begin{Vmatrix} A^{-1} \end{Vmatrix} \frac{1}{1 - \begin{Vmatrix} A^{-1}B \end{Vmatrix}}. \]

The above formula makes sense because \( \begin{Vmatrix} A^{-1}B \end{Vmatrix} < 1 \).
Proof: By Lemma 14.2.1,

\[ \begin{Vmatrix} A^{-1}B \end{Vmatrix} \leq \begin{Vmatrix} A^{-1} \end{Vmatrix} \parallel B\parallel < \begin{Vmatrix} A^{-1} \end{Vmatrix} \frac{1}{\begin{Vmatrix} A^{-1} \end{Vmatrix}} = 1 \]

Suppose \( \left( A + B \right) x = 0 \) with \( x \neq 0 \). Then \( 0 = A\left( I + A^{-1}B \right) x \) and so since \( A \) is one to one, \( \left( I + A^{-1}B \right) x = 0 \). Therefore,

\[ 0 = \begin{Vmatrix} \left( I + A^{-1}B \right) x \end{Vmatrix} \geq \parallel x\parallel - \begin{Vmatrix} A^{-1}Bx \end{Vmatrix} \]

\[ \geq \parallel x\parallel - \begin{Vmatrix} A^{-1}B \end{Vmatrix} \parallel x\parallel = \left( 1 - \begin{Vmatrix} A^{-1}B \end{Vmatrix} \right) \parallel x\parallel > 0 \]

a contradiction. Thus \( A + B \) is one to one, and the same computation shows \( \left( I + A^{-1}B \right) \) is one to one. Therefore, both \( \left( A + B \right)^{-1} \) and \( \left( I + A^{-1}B \right)^{-1} \) are in \( \mathcal{L}\left( X, X \right) \). Hence

\[ \left( A + B \right)^{-1} = \left( A\left( I + A^{-1}B \right) \right)^{-1} = \left( I + A^{-1}B \right)^{-1} A^{-1} \]

Now if

\[ x = \left( I + A^{-1}B \right)^{-1} y \]

for \( \parallel y\parallel \leq 1 \), then

\[ \left( I + A^{-1}B \right) x = y \]

and so

\[ \parallel x\parallel \left( 1 - \begin{Vmatrix} A^{-1}B \end{Vmatrix} \right) \leq \begin{Vmatrix} x + A^{-1}Bx \end{Vmatrix} \leq \parallel y\parallel \leq 1 \]

and so

\[ \parallel x\parallel = \begin{Vmatrix} \left( I + A^{-1}B \right)^{-1} y \end{Vmatrix} \leq \frac{1}{1 - \begin{Vmatrix} A^{-1}B \end{Vmatrix}} \]

Since \( \parallel y\parallel \leq 1 \) is arbitrary, this shows

\[ \begin{Vmatrix} \left( I + A^{-1}B \right)^{-1} \end{Vmatrix} \leq \frac{1}{1 - \begin{Vmatrix} A^{-1}B \end{Vmatrix}} \]

Therefore,

\[ \begin{Vmatrix} \left( A + B \right)^{-1} \end{Vmatrix} = \begin{Vmatrix} \left( I + A^{-1}B \right)^{-1} A^{-1} \end{Vmatrix} \]

\[ \leq \begin{Vmatrix} A^{-1} \end{Vmatrix} \begin{Vmatrix} \left( I + A^{-1}B \right)^{-1} \end{Vmatrix} \leq \begin{Vmatrix} A^{-1} \end{Vmatrix} \frac{1}{1 - \begin{Vmatrix} A^{-1}B \end{Vmatrix}} \blacksquare \]
Proposition 14.2.3 Suppose \( A \) is invertible, \( b \neq 0 \), \( Ax = b \), and \( A_1 x_1 = b_1 \) where \( \begin{Vmatrix} A - A_1 \end{Vmatrix} < 1/\begin{Vmatrix} A^{-1} \end{Vmatrix} \). Then

\[ \frac{\begin{Vmatrix} x_1 - x \end{Vmatrix}}{\parallel x\parallel} \leq \frac{1}{\left( 1 - \begin{Vmatrix} A^{-1}\left( A_1 - A \right) \end{Vmatrix} \right)} \parallel A\parallel \begin{Vmatrix} A^{-1} \end{Vmatrix} \left( \frac{\begin{Vmatrix} A_1 - A \end{Vmatrix}}{\parallel A\parallel} + \frac{\begin{Vmatrix} b - b_1 \end{Vmatrix}}{\parallel b\parallel} \right). \]
Proof: It follows from the assumptions that

\[ Ax - A_1 x + A_1 x - A_1 x_1 = b - b_1. \]

Hence

\[ A_1\left( x - x_1 \right) = \left( A_1 - A \right) x + b - b_1. \]

Now \( A_1 = \left( A + \left( A_1 - A \right) \right) \) and so by the above lemma, \( A_1^{-1} \) exists and so

\[ \left( x - x_1 \right) = A_1^{-1}\left( A_1 - A \right) x + A_1^{-1}\left( b - b_1 \right) \]

\[ = \left( A + \left( A_1 - A \right) \right)^{-1}\left( A_1 - A \right) x + \left( A + \left( A_1 - A \right) \right)^{-1}\left( b - b_1 \right). \]

By the estimate in Lemma 14.2.2,

\[ \begin{Vmatrix} x - x_1 \end{Vmatrix} \leq \frac{\begin{Vmatrix} A^{-1} \end{Vmatrix}}{1 - \begin{Vmatrix} A^{-1}\left( A_1 - A \right) \end{Vmatrix}} \left( \begin{Vmatrix} A_1 - A \end{Vmatrix} \parallel x\parallel + \begin{Vmatrix} b - b_1 \end{Vmatrix} \right). \]

Dividing by \( \parallel x\parallel \),

\[ \frac{\begin{Vmatrix} x - x_1 \end{Vmatrix}}{\parallel x\parallel} \leq \frac{\begin{Vmatrix} A^{-1} \end{Vmatrix}}{1 - \begin{Vmatrix} A^{-1}\left( A_1 - A \right) \end{Vmatrix}} \left( \begin{Vmatrix} A_1 - A \end{Vmatrix} + \frac{\begin{Vmatrix} b - b_1 \end{Vmatrix}}{\parallel x\parallel} \right) \]

(14.7)

Now \( b = Ax = A\left( A^{-1}b \right) \) and so \( \parallel b\parallel \leq \parallel A\parallel \begin{Vmatrix} A^{-1}b \end{Vmatrix} \) and so

\[ \parallel x\parallel = \begin{Vmatrix} A^{-1}b \end{Vmatrix} \geq \parallel b\parallel / \parallel A\parallel \]

Therefore, from (14.7),

\[ \frac{\begin{Vmatrix} x - x_1 \end{Vmatrix}}{\parallel x\parallel} \leq \frac{\begin{Vmatrix} A^{-1} \end{Vmatrix}}{1 - \begin{Vmatrix} A^{-1}\left( A_1 - A \right) \end{Vmatrix}} \left( \frac{\parallel A\parallel \begin{Vmatrix} A_1 - A \end{Vmatrix}}{\parallel A\parallel} + \frac{\parallel A\parallel \begin{Vmatrix} b - b_1 \end{Vmatrix}}{\parallel b\parallel} \right) \]

\[ \leq \frac{\begin{Vmatrix} A^{-1} \end{Vmatrix} \parallel A\parallel}{1 - \begin{Vmatrix} A^{-1}\left( A_1 - A \right) \end{Vmatrix}} \left( \frac{\begin{Vmatrix} A_1 - A \end{Vmatrix}}{\parallel A\parallel} + \frac{\begin{Vmatrix} b - b_1 \end{Vmatrix}}{\parallel b\parallel} \right) \]

which proves the proposition. \( \blacksquare \)
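This perturbation bound (the condition number \( \parallel A\parallel \begin{Vmatrix} A^{-1} \end{Vmatrix} \) controlling relative error) can be checked numerically. The sketch below is my own illustration, using the operator 2-norm; the specific matrix and perturbation sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned A
x = rng.standard_normal(n)
b = A @ x

A1 = A + 1e-3 * rng.standard_normal((n, n))          # perturbed matrix
b1 = b + 1e-3 * rng.standard_normal(n)               # perturbed right side
x1 = np.linalg.solve(A1, b1)

op = lambda M: np.linalg.norm(M, 2)                  # operator 2-norm
Ainv = np.linalg.inv(A)
lhs = np.linalg.norm(x1 - x) / np.linalg.norm(x)
rhs = (1.0 / (1.0 - op(Ainv @ (A1 - A)))
       * op(A) * op(Ainv)
       * (op(A1 - A) / op(A) + np.linalg.norm(b - b1) / np.linalg.norm(b)))
assert lhs <= rhs
```

The assertion holds whenever \( \begin{Vmatrix} A^{-1}\left( A_1 - A \right) \end{Vmatrix} < 1 \), which the small perturbation guarantees here.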
Lemma 14.3.2 Let \( J \) be a \( p \times p \) Jordan matrix

\[ J = \left( \begin{array}{lll} J_1 & & \\ & \ddots & \\ & & J_s \end{array}\right) \]

where each \( J_k \) is of the form

\[ J_k = \lambda_k I + N_k \]

in which \( N_k \) is a nilpotent matrix having zeros down the main diagonal and ones down the super diagonal. Then

\[ \lim_{n \rightarrow \infty} \begin{Vmatrix} J^n \end{Vmatrix}^{1/n} = \rho \]

where \( \rho = \max\left\{ \left| \lambda_k \right| : k = 1, \ldots, s \right\} \). Here the norm is defined to equal

\[ \parallel B\parallel = \max\left\{ \left| B_{ij} \right| : i, j \right\}. \]
Proof: Suppose first that \( \rho \neq 0 \). First note that for this norm, if \( B, C \) are \( p \times p \) matrices,

\[ \parallel BC\parallel \leq p \parallel B\parallel \parallel C\parallel \]

which follows from a simple computation. Now

\[ \begin{Vmatrix} J^n \end{Vmatrix}^{1/n} = \begin{Vmatrix} \left( \begin{matrix} \left( \lambda_1 I + N_1 \right)^n & & \\ & \ddots & \\ & & \left( \lambda_s I + N_s \right)^n \end{matrix}\right) \end{Vmatrix}^{1/n} \]

\[ = \rho \begin{Vmatrix} \left( \begin{matrix} \left( \frac{\lambda_1}{\rho} I + \frac{1}{\rho} N_1 \right)^n & & \\ & \ddots & \\ & & \left( \frac{\lambda_s}{\rho} I + \frac{1}{\rho} N_s \right)^n \end{matrix}\right) \end{Vmatrix}^{1/n} \]

(14.8)

From the definition of \( \rho \), at least one of the \( \lambda_k/\rho \) has absolute value equal to 1. Therefore,

\[ \begin{Vmatrix} \left( \begin{matrix} \left( \frac{\lambda_1}{\rho} I + \frac{1}{\rho} N_1 \right)^n & & \\ & \ddots & \\ & & \left( \frac{\lambda_s}{\rho} I + \frac{1}{\rho} N_s \right)^n \end{matrix}\right) \end{Vmatrix}^{1/n} - 1 \equiv e_n \geq 0 \]

because each \( N_k \) has only zero terms on the main diagonal, and so for the \( k \) just mentioned, some diagonal entry of the matrix, namely \( \left( \lambda_k/\rho \right)^n \), has absolute value equal to 1. Now also, since \( N_k^p = 0 \), the norm of the matrix in the above is dominated by an expression of the form \( Cn^p \) where \( C \) is some constant which does not depend on \( n \). This is because a typical block in the above matrix is of the form

\[ \sum_{i=0}^{p-1} \left( \begin{matrix} n \\ i \end{matrix}\right) \left( \frac{\lambda_k}{\rho} \right)^{n-i} \left( \frac{1}{\rho} N_k \right)^{i} \]

and each \( \left| \lambda_k \right| \leq \rho \).

It follows that for \( n > p + 1 \),

\[ Cn^p \geq \left( 1 + e_n \right)^n \geq \left( \begin{matrix} n \\ p + 1 \end{matrix}\right) e_n^{p+1} \]

and so

\[ \left( \frac{Cn^p}{\left( \begin{matrix} n \\ p + 1 \end{matrix}\right)} \right)^{1/\left( p + 1 \right)} \geq e_n \geq 0 \]

Therefore, \( \lim_{n \rightarrow \infty} e_n = 0 \). It follows from (14.8) that the expression in the norms in this equation converges to 1 and so

\[ \lim_{n \rightarrow \infty} \begin{Vmatrix} J^n \end{Vmatrix}^{1/n} = \rho \]

In case \( \rho = 0 \), so that all the eigenvalues equal zero, it follows that \( J^n = 0 \) for all \( n \geq p \). Therefore, the limit still exists and equals \( \rho \). \( \blacksquare \)
Theorem 14.3.3 (Gelfand) Let \( A \) be a complex \( p \times p \) matrix. Then if \( \rho \) is the largest absolute value of its eigenvalues (the spectral radius),

\[ \lim_{n \rightarrow \infty} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} = \rho \]

Here \( \parallel \cdot \parallel \) is any norm on \( \mathcal{L}\left( \mathbb{C}^p, \mathbb{C}^p \right) \).
Proof: First assume \( \parallel \cdot \parallel \) is the special norm of the above lemma. Then letting \( J \) denote the Jordan form of \( A \), \( S^{-1}AS = J \), it follows from Lemma 14.3.2 that

\[ \limsup_{n \rightarrow \infty} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} = \limsup_{n \rightarrow \infty} \begin{Vmatrix} S J^n S^{-1} \end{Vmatrix}^{1/n} \]

\[ \leq \limsup_{n \rightarrow \infty} \left( p^2 \parallel S\parallel \begin{Vmatrix} S^{-1} \end{Vmatrix} \right)^{1/n} \begin{Vmatrix} J^n \end{Vmatrix}^{1/n} = \rho \]

\[ = \liminf_{n \rightarrow \infty} \begin{Vmatrix} J^n \end{Vmatrix}^{1/n} = \liminf_{n \rightarrow \infty} \begin{Vmatrix} S^{-1} A^n S \end{Vmatrix}^{1/n} \]

\[ \leq \liminf_{n \rightarrow \infty} \left( p^2 \begin{Vmatrix} S^{-1} \end{Vmatrix} \parallel S\parallel \right)^{1/n} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} = \liminf_{n \rightarrow \infty} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} \]

It follows that \( \liminf_{n \rightarrow \infty} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} = \limsup_{n \rightarrow \infty} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} = \lim_{n \rightarrow \infty} \begin{Vmatrix} A^n \end{Vmatrix}^{1/n} = \rho \).
Now by equivalence of norms, if \( ||| \cdot ||| \) is any other norm for the set of complex \( p \times p \) matrices, there exist constants \( \delta, \Delta \) such that

\[ \delta \begin{Vmatrix} A^n \end{Vmatrix} \leq ||| A^n ||| \leq \Delta \begin{Vmatrix} A^n \end{Vmatrix} \]

Then raising to the \( 1/n \) power and taking a limit,

\[ \rho \leq \liminf_{n \rightarrow \infty} ||| A^n |||^{1/n} \leq \limsup_{n \rightarrow \infty} ||| A^n |||^{1/n} \leq \rho \quad \blacksquare \]
Consider \( \left( \begin{matrix} 9 & - 1 & 2 \\ - 2 & 8 & 4 \\ 1 & 1 & 8 \end{matrix}\right) \) . Estimate the absolute value of the largest eigenvalue.
A laborious computation reveals the eigenvalues are 5 and 10. Therefore, the right answer in this case is 10. Consider \( \begin{Vmatrix} A^7 \end{Vmatrix}^{1/7} \) where the norm is obtained by taking the maximum of the absolute values of the entries. Thus

\[ \left( \begin{matrix} 9 & -1 & 2 \\ -2 & 8 & 4 \\ 1 & 1 & 8 \end{matrix}\right)^{7} = \left( \begin{matrix} 8015625 & -1984375 & 3968750 \\ -3968750 & 6031250 & 7937500 \\ 1984375 & 1984375 & 6031250 \end{matrix}\right) \]

and taking the seventh root of the largest entry gives

\[ \rho\left( A \right) \approx 8015625^{1/7} = 9.68895123671. \]
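The same estimate is a one-liner in NumPy. This sketch (my own, not from the text) recomputes \( \begin{Vmatrix} A^7 \end{Vmatrix}^{1/7} \) in the max-entry norm and compares it with the true spectral radius:

```python
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])

A7 = np.linalg.matrix_power(A, 7)
estimate = np.max(np.abs(A7)) ** (1.0 / 7.0)    # max-entry norm, 7th root

rho = np.max(np.abs(np.linalg.eigvals(A)))      # true spectral radius
print(estimate, rho)
```

If the seventh power shown in the text is right, `estimate` comes out near 9.69 while `rho` is 10; raising the exponent beyond 7 tightens the estimate, as Gelfand's theorem predicts.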
Lemma 14.4.2 Suppose \( \left\{ A_k \right\}_{k=1}^{\infty} \) is a sequence in \( \mathcal{L}\left( X, Y \right) \) where \( X, Y \) are finite dimensional normed linear spaces. Then if

\[ \sum_{k=1}^{\infty} \begin{Vmatrix} A_k \end{Vmatrix} < \infty, \]

it follows that

\[ \sum_{k=1}^{\infty} A_k \]

(14.9)

exists. In words, absolute convergence implies convergence.
Proof: For \( p \leq m \leq n \),

\[ \begin{Vmatrix} \sum_{k=1}^{n} A_k - \sum_{k=1}^{m} A_k \end{Vmatrix} = \begin{Vmatrix} \sum_{k=m+1}^{n} A_k \end{Vmatrix} \leq \sum_{k=p}^{\infty} \begin{Vmatrix} A_k \end{Vmatrix} \]

and so for \( p \) large enough, the term on the right in the above inequality is less than \( \varepsilon \). Since \( \varepsilon \) is arbitrary, this shows the partial sums of (14.9) are a Cauchy sequence. Therefore, by Corollary 14.0.7, these partial sums converge. \( \blacksquare \)
Lemma 14.4.6 Let \( \left\{ a_p \right\} \) be a sequence of nonnegative terms and let

\[ r = \limsup_{p \rightarrow \infty} a_p^{1/p}. \]

Then if \( r < 1 \), the series \( \sum_{k=1}^{\infty} a_k \) converges, and if \( r > 1 \), then \( a_p \) fails to converge to 0, so the series diverges. If \( A \) is an \( n \times n \) matrix and

\[ 1 < \limsup_{p \rightarrow \infty} \begin{Vmatrix} A^p \end{Vmatrix}^{1/p}, \]

(14.10)

then \( \sum_{k=0}^{\infty} A^k \) fails to converge.
Proof: Suppose \( r < 1 \). Then there exists \( N \) such that if \( p > N \),

\[ a_p^{1/p} < R \]

where \( r < R < 1 \). Therefore, for all such \( p \), \( a_p < R^p \) and so by comparison with the geometric series \( \sum R^p \), it follows that \( \sum_{p=1}^{\infty} a_p \) converges.

Next suppose \( r > 1 \). Then letting \( 1 < R < r \), it follows there are infinitely many values of \( p \) at which

\[ R < a_p^{1/p} \]

which implies \( R^p < a_p \), showing that \( a_p \) cannot converge to 0, and so the series cannot converge either.

To see the last claim, if (14.10) holds, then from the first part of this lemma, \( \begin{Vmatrix} A^p \end{Vmatrix} \) fails to converge to 0 and so \( \left\{ \sum_{k=0}^{m} A^k \right\}_{m=0}^{\infty} \) is not a Cauchy sequence. Hence \( \sum_{k=0}^{\infty} A^k \equiv \lim_{m \rightarrow \infty} \sum_{k=0}^{m} A^k \) cannot exist. \( \blacksquare \)
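The matrix half of this lemma pairs with a standard converse via Gelfand's theorem: when \( \rho\left( A \right) < 1 \), the limsup in (14.10) equals \( \rho\left( A \right) < 1 \) and \( \sum_k A^k \) converges, in fact to \( \left( I - A \right)^{-1} \) (the Neumann series). A small numerical sketch of my own:

```python
import numpy as np

A = np.array([[0.2, 0.5],
              [0.1, 0.3]])
rho = np.max(np.abs(np.linalg.eigvals(A)))
assert rho < 1        # so limsup ||A^p||^(1/p) = rho < 1 and the series converges

# Partial sums of I + A + A^2 + ... approach (I - A)^{-1}.
S = np.zeros((2, 2))
P = np.eye(2)
for _ in range(200):
    S = S + P
    P = P @ A

assert np.allclose(S, np.linalg.inv(np.eye(2) - A))
```

Replacing `A` by a matrix with spectral radius above 1 makes the partial sums blow up, matching the divergence claim of the lemma.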
Lemma 14.4.7 \( \sigma \left( {A}^{p}\right) = \sigma {\left( A\right) }^{p} \)
Proof: In dealing with \( \sigma\left( A^p \right) \), it suffices to deal with \( \sigma\left( J^p \right) \) where \( J \) is the Jordan form of \( A \) because \( J^p \) and \( A^p \) are similar. Thus if \( \lambda \in \sigma\left( A^p \right) \), then \( \lambda \in \sigma\left( J^p \right) \) and so \( \lambda \) equals one of the entries on the main diagonal of \( J^p \). These entries are of the form \( \mu^p \) where \( \mu \in \sigma\left( A \right) \). Thus \( \lambda \in \sigma\left( A \right)^p \) and this shows \( \sigma\left( A^p \right) \subseteq \sigma\left( A \right)^p \). Now take \( \alpha \in \sigma\left( A \right) \) and consider \( \alpha^p \).

\[ \alpha^p I - A^p = \left( \alpha^{p-1} I + \cdots + \alpha A^{p-2} + A^{p-1} \right) \left( \alpha I - A \right) \]

and so \( \alpha^p I - A^p \) fails to be one to one, which shows that \( \alpha^p \in \sigma\left( A^p \right) \). Hence \( \sigma\left( A \right)^p \subseteq \sigma\left( A^p \right) \). \( \blacksquare \)
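This spectral mapping property for powers is easy to confirm numerically. The check below (my own sketch) matches each eigenvalue of \( A^3 \) with the cube of some eigenvalue of \( A \), and conversely:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
p = 3

eig_Ap = np.linalg.eigvals(np.linalg.matrix_power(A, p))  # sigma(A^p)
eig_A = np.linalg.eigvals(A)                              # sigma(A)

# sigma(A^p) and sigma(A)^p agree as sets, up to floating point error.
for lam in eig_Ap:
    assert np.min(np.abs(eig_A ** p - lam)) < 1e-6
for mu in eig_A:
    assert np.min(np.abs(eig_Ap - mu ** p)) < 1e-6
```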
Use the Gauss Seidel method to solve the system

\[ \left( \begin{array}{llll} 3 & 1 & 0 & 0 \\ 1 & 4 & 1 & 0 \\ 0 & 2 & 5 & 1 \\ 0 & 0 & 2 & 4 \end{array}\right) \left( \begin{array}{l} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array}\right) = \left( \begin{array}{l} 1 \\ 2 \\ 3 \\ 4 \end{array}\right) \]
In terms of matrices, this procedure is

\[ \left( \begin{array}{llll} 3 & 0 & 0 & 0 \\ 1 & 4 & 0 & 0 \\ 0 & 2 & 5 & 0 \\ 0 & 0 & 2 & 4 \end{array}\right) \left( \begin{array}{l} x_1^{r+1} \\ x_2^{r+1} \\ x_3^{r+1} \\ x_4^{r+1} \end{array}\right) = -\left( \begin{array}{llll} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right) \left( \begin{array}{l} x_1^{r} \\ x_2^{r} \\ x_3^{r} \\ x_4^{r} \end{array}\right) + \left( \begin{array}{l} 1 \\ 2 \\ 3 \\ 4 \end{array}\right). \]

Multiplying by the inverse of the matrix on the left, this yields

\[ \left( \begin{matrix} x_1^{r+1} \\ x_2^{r+1} \\ x_3^{r+1} \\ x_4^{r+1} \end{matrix}\right) = -\left( \begin{matrix} 0 & \frac{1}{3} & 0 & 0 \\ 0 & -\frac{1}{12} & \frac{1}{4} & 0 \\ 0 & \frac{1}{30} & -\frac{1}{10} & \frac{1}{5} \\ 0 & -\frac{1}{60} & \frac{1}{20} & -\frac{1}{10} \end{matrix}\right) \left( \begin{matrix} x_1^{r} \\ x_2^{r} \\ x_3^{r} \\ x_4^{r} \end{matrix}\right) + \left( \begin{matrix} \frac{1}{3} \\ \frac{5}{12} \\ \frac{13}{30} \\ \frac{47}{60} \end{matrix}\right) \]

As before, I will be totally unoriginal in the choice of \( \mathbf{x}^1 \). Let it equal the zero vector. Therefore,

\[ \mathbf{x}^2 = \left( \begin{matrix} \frac{1}{3} \\ \frac{5}{12} \\ \frac{13}{30} \\ \frac{47}{60} \end{matrix}\right). \]
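The iteration above is short to code. A sketch of Gauss–Seidel for this system (mine, not from the text), written as the splitting \( Bx^{r+1} = -Cx^r + b \) with \( B \) the lower triangle of \( A \) and \( C \) the strictly upper triangle:

```python
import numpy as np

A = np.array([[3.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 2.0, 5.0, 1.0],
              [0.0, 0.0, 2.0, 4.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

B = np.tril(A)        # lower triangle, diagonal included
C = np.triu(A, k=1)   # strictly upper triangle

x = np.zeros(4)       # start from the zero vector
for _ in range(50):
    x = np.linalg.solve(B, -C @ x + b)

print(x)              # converges to the solution of Ax = b
```

In practice each step is a forward substitution rather than a general solve, but `np.linalg.solve` keeps the sketch short.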
Lemma 14.6.4 Suppose \( T : E \rightarrow E \) where \( E \) is a Banach space with norm \( \left| \cdot \right| \). Also suppose

\[ \left| T\mathbf{x} - T\mathbf{y} \right| \leq r\left| \mathbf{x} - \mathbf{y} \right| \]

for some \( r \in \left( 0, 1 \right) \). Then there exists a unique fixed point \( \mathbf{x} \in E \) such that

\[ T\mathbf{x} = \mathbf{x}. \]
Proof: This follows easily when it is shown that the sequence \( \left\{ T^k \mathbf{x}^1 \right\}_{k=1}^{\infty} \) is a Cauchy sequence. Note that

\[ \left| T^2\mathbf{x}^1 - T\mathbf{x}^1 \right| \leq r\left| T\mathbf{x}^1 - \mathbf{x}^1 \right| \]

Suppose

\[ \left| T^k\mathbf{x}^1 - T^{k-1}\mathbf{x}^1 \right| \leq r^{k-1}\left| T\mathbf{x}^1 - \mathbf{x}^1 \right|. \]

Then

\[ \left| T^{k+1}\mathbf{x}^1 - T^k\mathbf{x}^1 \right| \leq r\left| T^k\mathbf{x}^1 - T^{k-1}\mathbf{x}^1 \right| \]

\[ \leq r \, r^{k-1}\left| T\mathbf{x}^1 - \mathbf{x}^1 \right| = r^k\left| T\mathbf{x}^1 - \mathbf{x}^1 \right|. \]

By induction, this estimate is valid for all \( k \geq 2 \). Now let \( k > l \geq N \).

\[ \left| T^k\mathbf{x}^1 - T^l\mathbf{x}^1 \right| = \left| \sum_{j=l}^{k-1} \left( T^{j+1}\mathbf{x}^1 - T^j\mathbf{x}^1 \right) \right| \leq \sum_{j=l}^{k-1} \left| T^{j+1}\mathbf{x}^1 - T^j\mathbf{x}^1 \right| \]

\[ \leq \sum_{j=N}^{k-1} r^j \left| T\mathbf{x}^1 - \mathbf{x}^1 \right| \leq \left| T\mathbf{x}^1 - \mathbf{x}^1 \right| \frac{r^N}{1 - r} \]

which converges to 0 as \( N \rightarrow \infty \). Therefore, this is a Cauchy sequence, so it must converge to some \( \mathbf{x} \in E \). Since \( T \) is Lipschitz, it is continuous, and so

\[ \mathbf{x} = \lim_{k \rightarrow \infty} T^k\mathbf{x}^1 = \lim_{k \rightarrow \infty} T^{k+1}\mathbf{x}^1 = T \lim_{k \rightarrow \infty} T^k\mathbf{x}^1 = T\mathbf{x}. \]

This shows the existence of the fixed point. To show it is unique, suppose there were another one, \( \mathbf{y} \). Then

\[ \left| \mathbf{x} - \mathbf{y} \right| = \left| T\mathbf{x} - T\mathbf{y} \right| \leq r\left| \mathbf{x} - \mathbf{y} \right| \]

and so \( \mathbf{x} = \mathbf{y} \).

It remains to verify the estimate.

\[ \left| \mathbf{x}^1 - \mathbf{x} \right| \leq \left| \mathbf{x}^1 - T\mathbf{x}^1 \right| + \left| T\mathbf{x}^1 - \mathbf{x} \right| = \left| \mathbf{x}^1 - T\mathbf{x}^1 \right| + \left| T\mathbf{x}^1 - T\mathbf{x} \right| \]

\[ \leq \left| \mathbf{x}^1 - T\mathbf{x}^1 \right| + r\left| \mathbf{x}^1 - \mathbf{x} \right| \]

and solving the inequality for \( \left| \mathbf{x}^1 - \mathbf{x} \right| \) gives the estimate desired. \( \blacksquare \)
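The proof is constructive: iterate \( T \) from any starting point. A scalar sketch of my own (the example \( T = \cos \) is a contraction on \( \left[ 0, 1 \right] \) since \( \left| \cos' t \right| = \left| \sin t \right| \leq \sin 1 < 1 \) there):

```python
import math

def fixed_point(T, x, tol=1e-12, max_iter=100_000):
    """Iterate x, T(x), T^2(x), ... until successive iterates agree to tol."""
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

x = fixed_point(math.cos, 1.0)
print(x)   # the unique fixed point of cos, about 0.739085
```

The contraction constant controls the convergence rate: the error shrinks by roughly the factor \( r \) per iteration, exactly as the geometric-series estimate in the proof predicts.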
Corollary 14.6.5 Suppose \( T : E \rightarrow E \) and, for some constant \( C \),

\[ \left| T\mathbf{x} - T\mathbf{y} \right| \leq C\left| \mathbf{x} - \mathbf{y} \right| \]

for all \( \mathbf{x}, \mathbf{y} \in E \), and for some \( N \in \mathbb{N} \),

\[ \left| T^N\mathbf{x} - T^N\mathbf{y} \right| \leq r\left| \mathbf{x} - \mathbf{y} \right| \]

for all \( \mathbf{x}, \mathbf{y} \in E \) where \( r \in \left( 0, 1 \right) \). Then there exists a unique fixed point for \( T \) and it is still the limit of the sequence \( \left\{ T^k\mathbf{x}^1 \right\} \) for any choice of \( \mathbf{x}^1 \).
Proof: From Lemma 14.6.4 there exists a unique fixed point for \( T^N \), denoted here as \( \mathbf{x} \). Therefore, \( T^N\mathbf{x} = \mathbf{x} \). Now applying \( T \) to both sides,

\[ T^N T\mathbf{x} = T\mathbf{x} \]

By uniqueness, \( T\mathbf{x} = \mathbf{x} \) because the above equation shows \( T\mathbf{x} \) is a fixed point of \( T^N \) and there is only one fixed point of \( T^N \). In fact, there is only one fixed point of \( T \) because a fixed point of \( T \) is automatically a fixed point of \( T^N \).

It remains to show \( T^k\mathbf{x}^1 \rightarrow \mathbf{x} \), the unique fixed point of \( T^N \). If this does not happen, there exists \( \varepsilon > 0 \) and a subsequence, still denoted by \( T^k \), such that

\[ \left| T^k\mathbf{x}^1 - \mathbf{x} \right| \geq \varepsilon \]

Now \( k = j_k N + r_k \) where \( r_k \in \left\{ 0, \cdots, N - 1 \right\} \) and \( j_k \) is a positive integer such that \( \lim_{k \rightarrow \infty} j_k = \infty \). Then there exists a single \( r \in \left\{ 0, \cdots, N - 1 \right\} \) such that \( r_k = r \) for infinitely many \( k \). Taking a further subsequence, still denoted by \( T^k \), it follows

\[ \left| T^{j_k N + r}\mathbf{x}^1 - \mathbf{x} \right| \geq \varepsilon \]

(14.25)

However,

\[ T^{j_k N + r}\mathbf{x}^1 = T^r T^{j_k N}\mathbf{x}^1 \rightarrow T^r \mathbf{x} = \mathbf{x} \]

and this contradicts (14.25). \( \blacksquare \)
Theorem 14.6.6 Suppose \( \rho\left( B^{-1}C \right) < 1 \). Then the iterates in (14.18) converge to the unique solution of (14.17).
Proof: Consider the iterates in (14.18). Let \( T\mathbf{x} = B^{-1}C\mathbf{x} + \mathbf{b} \). Then

\[ \left| T^k\mathbf{x} - T^k\mathbf{y} \right| = \left| \left( B^{-1}C \right)^k \mathbf{x} - \left( B^{-1}C \right)^k \mathbf{y} \right| \leq \begin{Vmatrix} \left( B^{-1}C \right)^k \end{Vmatrix} \left| \mathbf{x} - \mathbf{y} \right|. \]

Here \( \parallel \cdot \parallel \) refers to any of the operator norms. It doesn't matter which one you pick because they are all equivalent. I am writing the proof to indicate the operator norm taken with respect to the usual norm on \( E \). Since \( \rho\left( B^{-1}C \right) < 1 \), pick \( r \in \left( \rho\left( B^{-1}C \right), 1 \right) \). It follows from Gelfand's theorem, Theorem 14.3.3 on Page 349, that there exists \( N \) such that if \( k \geq N \), then

\[ \begin{Vmatrix} \left( B^{-1}C \right)^k \end{Vmatrix}^{1/k} < r, \text{ so } \begin{Vmatrix} \left( B^{-1}C \right)^k \end{Vmatrix} < r^k < 1. \]

Consequently,

\[ \left| T^N\mathbf{x} - T^N\mathbf{y} \right| \leq r^N \left| \mathbf{x} - \mathbf{y} \right| \]

where \( r^N < 1 \). Also \( \left| T\mathbf{x} - T\mathbf{y} \right| \leq \begin{Vmatrix} B^{-1}C \end{Vmatrix} \left| \mathbf{x} - \mathbf{y} \right| \) and so Corollary 14.6.5 applies and gives the conclusion of this theorem. \( \blacksquare \)
Find the largest eigenvalue of \( A = \left( \begin{matrix} 5 & - {14} & {11} \\ - 4 & 4 & - 4 \\ 3 & 6 & - 3 \end{matrix}\right) \) .
You can begin with \( \mathbf{u}_1 = \left( 1, \cdots, 1 \right)^T \) and apply the above procedure. However, you can accelerate the process if you begin with \( A^n\mathbf{u}_1 \) and then divide by the largest entry to get the first approximate eigenvector. Thus

\[ \left( \begin{matrix} 5 & -14 & 11 \\ -4 & 4 & -4 \\ 3 & 6 & -3 \end{matrix}\right)^{20} \left( \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right) = \left( \begin{matrix} 2.5558 \times 10^{21} \\ -1.2779 \times 10^{21} \\ -3.6562 \times 10^{15} \end{matrix}\right) \]

Divide by the largest entry to obtain a good approximation.

\[ \left( \begin{matrix} 2.5558 \times 10^{21} \\ -1.2779 \times 10^{21} \\ -3.6562 \times 10^{15} \end{matrix}\right) \frac{1}{2.5558 \times 10^{21}} = \left( \begin{matrix} 1.0 \\ -0.5 \\ -1.4306 \times 10^{-6} \end{matrix}\right) \]

Now begin with this one.

\[ \left( \begin{matrix} 5 & -14 & 11 \\ -4 & 4 & -4 \\ 3 & 6 & -3 \end{matrix}\right) \left( \begin{matrix} 1.0 \\ -0.5 \\ -1.4306 \times 10^{-6} \end{matrix}\right) = \left( \begin{matrix} 12.000 \\ -6.0000 \\ 4.2918 \times 10^{-6} \end{matrix}\right) \]

Divide by 12 to get the next iterate.

\[ \left( \begin{matrix} 12.000 \\ -6.0000 \\ 4.2918 \times 10^{-6} \end{matrix}\right) \frac{1}{12} = \left( \begin{matrix} 1.0 \\ -0.5 \\ 3.5765 \times 10^{-7} \end{matrix}\right) \]

Another iteration will reveal that the scaling factor is still 12. Thus this is an approximate eigenvalue. In fact, it is the largest eigenvalue and the corresponding eigenvector is

\[ \left( \begin{matrix} 1.0 \\ -0.5 \\ 0 \end{matrix}\right) \]

The process has worked very well.
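The scaled power iteration described above takes only a few lines of code. A sketch of my own for this matrix, scaling by the entry of largest absolute value at each step:

```python
import numpy as np

A = np.array([[5.0, -14.0, 11.0],
              [-4.0, 4.0, -4.0],
              [3.0, 6.0, -3.0]])

u = np.ones(3)
scale = 1.0
for _ in range(50):
    v = A @ u
    scale = v[np.argmax(np.abs(v))]   # scaling factor -> dominant eigenvalue
    u = v / scale

print(scale, u)   # scale near 12, u near (1, -0.5, 0)
```

The convergence rate is governed by the ratio of the second-largest to the largest eigenvalue magnitude, here \( 6/12 = 1/2 \) per step.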
Lemma 15.1.3 Let \( \left\{ \lambda_k \right\}_{k=1}^{n} \) be the eigenvalues of \( A \). If \( \mathbf{x}_k \) is an eigenvector of \( A \) for the eigenvalue \( \lambda_k \), then \( \mathbf{x}_k \) is an eigenvector for \( \left( A - \alpha I \right)^{-1} \) corresponding to the eigenvalue \( \frac{1}{\lambda_k - \alpha} \). Conversely, if

\[ \left( A - \alpha I \right)^{-1} \mathbf{y} = \frac{1}{\lambda - \alpha} \mathbf{y} \]

(15.4)

and \( \mathbf{y} \neq \mathbf{0} \), then \( A\mathbf{y} = \lambda\mathbf{y} \).
Proof: Let \( \lambda_k \) and \( \mathbf{x}_k \) be as described in the statement of the lemma. Then

\[ \left( A - \alpha I \right) \mathbf{x}_k = \left( \lambda_k - \alpha \right) \mathbf{x}_k \]

and so

\[ \frac{1}{\lambda_k - \alpha} \mathbf{x}_k = \left( A - \alpha I \right)^{-1} \mathbf{x}_k \]

Now suppose (15.4) holds. Then \( \mathbf{y} = \frac{1}{\lambda - \alpha}\left[ A\mathbf{y} - \alpha\mathbf{y} \right] \). Solving for \( A\mathbf{y} \) leads to \( A\mathbf{y} = \lambda\mathbf{y} \). \( \blacksquare \)
Find the eigenvalue of \( A = \left( \begin{matrix} 5 & - {14} & {11} \\ - 4 & 4 & - 4 \\ 3 & 6 & - 3 \end{matrix}\right) \) which is closest to -7.
In this case the eigenvalues are -6,0 , and 12 so the correct answer is -6 for the eigenvalue. Then from the above procedure, I will start with an initial vector,\n\n\[ \n{\mathbf{u}}_{1} \equiv \left( \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right) \n\]\n\nThen I must solve the following equation.\n\n\[ \n\left( {\left( \begin{matrix} 5 & - {14} & {11} \\ - 4 & 4 & - 4 \\ 3 & 6 & - 3 \end{matrix}\right) + 7\left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) }\right) \left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right) \n\]\n\nSimplifying the matrix on the left, I must solve\n\n\[ \n\left( \begin{matrix} {12} & - {14} & {11} \\ - 4 & {11} & - 4 \\ 3 & 6 & 4 \end{matrix}\right) \left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right) \n\]\n\nand then divide by the entry which has largest absolute value to obtain\n\n\[ \n{\mathbf{u}}_{2} = \left( \begin{matrix} {1.0} \\ {.184} \\ - {.76} \end{matrix}\right) \n\]\n\nNow solve\n\[ \n\left( \begin{matrix} {12} & - {14} & {11} \\ - 4 & {11} & - 4 \\ 3 & 6 & 4 \end{matrix}\right) \left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{matrix} {1.0} \\ {.184} \\ - {.76} \end{matrix}\right) \n\]\n\nand divide by the largest entry, 1.0515 to get\n\n\[ \n{\mathbf{u}}_{3} = \left( \begin{matrix} {1.0} \\ {.0266} \\ - {.97061} \end{matrix}\right) \n\]\n\nSolve\n\[ \n\left( \begin{matrix} {12} & - {14} & {11} \\ - 4 & {11} & - 4 \\ 3 & 6 & 4 \end{matrix}\right) \left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{matrix} {1.0} \\ {.0266} \\ - {.97061} \end{matrix}\right) \n\]\n\nand divide by the largest entry, 1.01 to get\n\n\[ \n{\mathbf{u}}_{4} = \left( \begin{matrix} {1.0} \\ {3.8454} \times {10}^{-3} \\ - {.99604} \end{matrix}\right) . \n\]\n\nThese scaling factors are pretty close after these few iterations. 
Therefore, the predicted eigenvalue is obtained by solving the following for \( \lambda \) .\n\n\[ \n\frac{1}{\lambda + 7} = {1.01} \n\]\n\nwhich gives \( \lambda = - {6.01} \) . You see this is pretty close. In this case the eigenvalue closest to -7 was -6 .
Yes
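The shifted inverse power method of this example can be sketched in Python: to find the eigenvalue of \( A \) closest to \( \alpha = -7 \), repeatedly solve \( \left( A - \alpha I \right) \mathbf{x} = \mathbf{u} \) and rescale, then recover \( \lambda = \alpha + 1/m \) from the limiting scaling factor \( m \). The small Gaussian-elimination solver and all names here are illustrative, not from the text.

```python
def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [bi] for row, bi in zip(M, b)]   # augmented copy
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def shifted_inverse_power(A, alpha, u, iterations=40):
    """Power method applied implicitly to (A - alpha I)^(-1)."""
    n = len(A)
    B = [[A[i][j] - (alpha if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    m = 1.0
    for _ in range(iterations):
        w = solve(B, u)
        m = max(w, key=abs)       # approximates 1 / (lambda - alpha)
        u = [x / m for x in w]
    return alpha + 1.0 / m        # recover the eigenvalue of A itself

A = [[5, -14, 11],
     [-4,  4, -4],
     [ 3,  6, -3]]
lam = shifted_inverse_power(A, -7.0, [1.0, 1.0, 1.0])
print(lam)   # close to -6, the eigenvalue of A nearest -7
```

By the lemma, the eigenvalues of \( \left( A + 7I \right)^{-1} \) are \( 1, 1/7, 1/19 \), so the iteration converges quickly to scaling factor \( 1 \), giving \( \lambda = -7 + 1 = -6 \).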
Corollary 15.1.8 If \( A \) is Hermitian, then all the eigenvalues of \( A \) are real and there exists an orthonormal basis of eigenvectors.
Thus for \( {\left\{ {\mathbf{x}}_{k}\right\} }_{k = 1}^{n} \) this orthonormal basis, \[ {\mathbf{x}}_{i}^{ * }{\mathbf{x}}_{j} = {\delta }_{ij} \equiv \left\{ \begin{array}{l} 1\text{ if }i = j \\ 0\text{ if }i \neq j \end{array}\right. \] Now let the eigenvalues of \( A \) be \( {\lambda }_{1} \leq {\lambda }_{2} \leq \cdots \leq {\lambda }_{n} \) and \( A{\mathbf{x}}_{k} = {\lambda }_{k}{\mathbf{x}}_{k} \) where \( {\left\{ {\mathbf{x}}_{k}\right\} }_{k = 1}^{n} \) is the above orthonormal basis of eigenvectors mentioned in the corollary. Then if \( \mathbf{x} \) is an arbitrary vector, there exist constants, \( {a}_{i} \) such that \[ \mathbf{x} = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{\mathbf{x}}_{i} \] Also, \[ {\left| \mathbf{x}\right| }^{2} = \mathop{\sum }\limits_{{i = 1}}^{n}{\bar{a}}_{i}{\mathbf{x}}_{i}^{ * }\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{\mathbf{x}}_{j} \] \[ = \mathop{\sum }\limits_{{ij}}{\bar{a}}_{i}{a}_{j}{\mathbf{x}}_{i}^{ * }{\mathbf{x}}_{j} = \mathop{\sum }\limits_{{ij}}{\bar{a}}_{i}{a}_{j}{\delta }_{ij} = \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}. 
\] Therefore, \[ \frac{{\mathbf{x}}^{ * }A\mathbf{x}}{{\left| \mathbf{x}\right| }^{2}} = \frac{\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{\bar{a}}_{i}{\mathbf{x}}_{i}^{ * }}\right) \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{j}{\lambda }_{j}{\mathbf{x}}_{j}}\right) }{\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}} \] \[ = \frac{\mathop{\sum }\limits_{{ij}}{\bar{a}}_{i}{a}_{j}{\lambda }_{j}{\mathbf{x}}_{i}^{ * }{\mathbf{x}}_{j}}{\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}} = \frac{\mathop{\sum }\limits_{{ij}}{\bar{a}}_{i}{a}_{j}{\lambda }_{j}{\delta }_{ij}}{\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}} \] \[ = \frac{\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}{\lambda }_{i}}{\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {a}_{i}\right| }^{2}} \in \left\lbrack {{\lambda }_{1},{\lambda }_{n}}\right\rbrack \] In other words, the Rayleigh quotient is always between the largest and the smallest eigenvalues of \( A \) . When \( \mathbf{x} = {\mathbf{x}}_{n} \), the Rayleigh quotient equals the largest eigenvalue and when \( \mathbf{x} = {\mathbf{x}}_{1} \) the Rayleigh quotient equals the smallest eigenvalue.
Yes
Theorem 15.1.9 Let \( \mathbf{x} \neq \mathbf{0} \) and form the Rayleigh quotient,\n\n\[ \frac{{\mathbf{x}}^{ * }A\mathbf{x}}{{\left| \mathbf{x}\right| }^{2}} \equiv q \]\n\nThen there exists an eigenvalue of \( A \), denoted here by \( {\lambda }_{q} \) such that\n\n\[ \left| {{\lambda }_{q} - q}\right| \leq \frac{\left| A\mathbf{x} - q\mathbf{x}\right| }{\left| \mathbf{x}\right| } \]
Proof: Let \( \mathbf{x} = \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{\mathbf{x}}_{k} \) where \( {\left\{ {\mathbf{x}}_{k}\right\} }_{k = 1}^{n} \) is the orthonormal basis of eigenvectors.\n\n\[ {\left| A\mathbf{x} - q\mathbf{x}\right| }^{2} = {\left( A\mathbf{x} - q\mathbf{x}\right) }^{ * }\left( {A\mathbf{x} - q\mathbf{x}}\right) \]\n\n\[ = {\left( \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{\lambda }_{k}{\mathbf{x}}_{k} - q{a}_{k}{\mathbf{x}}_{k}\right) }^{ * }\left( {\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{\lambda }_{k}{\mathbf{x}}_{k} - q{a}_{k}{\mathbf{x}}_{k}}\right) \]\n\n\[ = \left( {\mathop{\sum }\limits_{{j = 1}}^{n}\left( {{\lambda }_{j} - q}\right) {\bar{a}}_{j}{\mathbf{x}}_{j}^{ * }}\right) \left( {\mathop{\sum }\limits_{{k = 1}}^{n}\left( {{\lambda }_{k} - q}\right) {a}_{k}{\mathbf{x}}_{k}}\right) \]\n\n\[ = \mathop{\sum }\limits_{{j, k}}\left( {{\lambda }_{j} - q}\right) {\bar{a}}_{j}\left( {{\lambda }_{k} - q}\right) {a}_{k}{\mathbf{x}}_{j}^{ * }{\mathbf{x}}_{k} \]\n\n\[ = \mathop{\sum }\limits_{{k = 1}}^{n}{\left| {a}_{k}\right| }^{2}{\left( {\lambda }_{k} - q\right) }^{2} \]\n\n\nNow pick the eigenvalue \( {\lambda }_{q} \) which is closest to \( q \) . Then\n\n\[ {\left| A\mathbf{x} - q\mathbf{x}\right| }^{2} = \mathop{\sum }\limits_{{k = 1}}^{n}{\left| {a}_{k}\right| }^{2}{\left( {\lambda }_{k} - q\right) }^{2} \geq {\left( {\lambda }_{q} - q\right) }^{2}\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {a}_{k}\right| }^{2} = {\left( {\lambda }_{q} - q\right) }^{2}{\left| \mathbf{x}\right| }^{2} \]\n\nwhich implies (15.6). ∎
Yes
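Theorem 15.1.9 is easy to check numerically. The symmetric matrix \( \left( \begin{matrix} 2 & 1 \\ 1 & 2 \end{matrix}\right) \), whose eigenvalues are \( 3 \) and \( 1 \), and the test vector below are illustrative choices, not taken from the text.

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]
eigenvalues = [3.0, 1.0]          # known analytically for this matrix

x = [1.0, 0.8]
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]

norm_x = math.sqrt(sum(xi * xi for xi in x))
q = sum(xi * yi for xi, yi in zip(x, Ax)) / norm_x**2   # Rayleigh quotient

# |Ax - qx| / |x|, the error bound from the theorem
residual = math.sqrt(sum((yi - q * xi)**2 for xi, yi in zip(x, Ax))) / norm_x
gap = min(abs(lam - q) for lam in eigenvalues)

print(q, gap, residual)   # the distance to the nearest eigenvalue never exceeds the bound
assert gap <= residual
```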
Lemma 15.2.2 Let \( A \) be an \( n \times n \) matrix and let the \( {Q}_{k} \) and \( {R}_{k} \) be as described in the algorithm. Then each \( {A}_{k} \) is unitarily similar to \( A \) and denoting by \( {Q}^{\left( k\right) } \) the product \( {Q}_{1}{Q}_{2}\cdots {Q}_{k} \) and \( {R}^{\left( k\right) } \) the product \( {R}_{k}{R}_{k - 1}\cdots {R}_{1} \), it follows that\n\n\[ \n{A}^{k} = {Q}^{\left( k\right) }{R}^{\left( k\right) } \n\]\n\n(The matrix on the left is \( A \) raised to the \( {k}^{\text{th }} \) power.)\n\n\[ \nA = {Q}^{\left( k\right) }{A}_{k}{Q}^{\left( k\right) * },{A}_{k} = {Q}^{\left( k\right) * }A{Q}^{\left( k\right) }. \n\]
Proof: From the algorithm, \( {R}_{k + 1} = {A}_{k + 1}{Q}_{k + 1}^{ * } \) and so\n\n\[ \n{A}_{k} = {Q}_{k + 1}{R}_{k + 1} = {Q}_{k + 1}{A}_{k + 1}{Q}_{k + 1}^{ * } \n\]\n\nNow iterating this, it follows\n\n\[ \n{A}_{k - 1} = {Q}_{k}{A}_{k}{Q}_{k}^{ * } = {Q}_{k}{Q}_{k + 1}{A}_{k + 1}{Q}_{k + 1}^{ * }{Q}_{k}^{ * } \n\]\n\n\[ \n{A}_{k - 2} = {Q}_{k - 1}{A}_{k - 1}{Q}_{k - 1}^{ * } = {Q}_{k - 1}{Q}_{k}{Q}_{k + 1}{A}_{k + 1}{Q}_{k + 1}^{ * }{Q}_{k}^{ * }{Q}_{k - 1}^{ * } \n\]\n\netc. Thus, after \( k - 2 \) more iterations,\n\n\[ \nA = {Q}^{\left( k + 1\right) }{A}_{k + 1}{Q}^{\left( {k + 1}\right) * } \n\]\n\nThe product of unitary matrices is unitary and so this proves the first claim of the lemma.\n\nNow consider the part about \( {A}^{k} \) . From the algorithm, this is clearly true for \( k = 1 \) . \( \left( {{A}^{1} = {QR}}\right) \) Suppose then that\n\n\[ \n{A}^{k} = {Q}_{1}{Q}_{2}\cdots {Q}_{k}{R}_{k}{R}_{k - 1}\cdots {R}_{1} \n\]\n\nWhat was just shown indicated\n\n\[ \nA = {Q}_{1}{Q}_{2}\cdots {Q}_{k + 1}{A}_{k + 1}{Q}_{k + 1}^{ * }{Q}_{k}^{ * }\cdots {Q}_{1}^{ * } \n\]\n\nand now from the algorithm, \( {A}_{k + 1} = {R}_{k + 1}{Q}_{k + 1} \) and so\n\n\[ \nA = {Q}_{1}{Q}_{2}\cdots {Q}_{k + 1}{R}_{k + 1}{Q}_{k + 1}{Q}_{k + 1}^{ * }{Q}_{k}^{ * }\cdots {Q}_{1}^{ * } \n\]\n\nThen\n\n\[ \n{A}^{k + 1} = A{A}^{k} = \n\]\n\n\[ \n\overset{A}{\overbrace{{Q}_{1}{Q}_{2}\cdots {Q}_{k + 1}{R}_{k + 1}{Q}_{k + 1}{Q}_{k + 1}^{ * }{Q}_{k}^{ * }\cdots {Q}_{1}^{ * }}}{Q}_{1}\cdots {Q}_{k}{R}_{k}{R}_{k - 1}\cdots {R}_{1} \n\]\n\n\[ \n= {Q}_{1}{Q}_{2}\cdots {Q}_{k + 1}{R}_{k + 1}{R}_{k}{R}_{k - 1}\cdots {R}_{1} \equiv {Q}^{\left( k + 1\right) }{R}^{\left( k + 1\right) }\blacksquare \n\]
Yes
Corollary 15.2.5 Let \( A \) be a real symmetric \( n \times n \) matrix having eigenvalues\n\n\[ \n{\lambda }_{1} > {\lambda }_{2} > \cdots > {\lambda }_{n} > 0 \]\n\nand let \( Q \) be defined by\n\n\[ \n{QD}{Q}^{T} = A, D = {Q}^{T}{AQ}, \]\n\n(15.14)\n\nwhere \( Q \) is orthogonal and \( D \) is a diagonal matrix having the eigenvalues on the main diagonal decreasing in size from the upper left corner to the lower right. Let \( {Q}^{T} \) have an \( {LU} \) factorization. Then in the \( {QR} \) algorithm, the matrices \( {Q}^{\left( k\right) } \) converge to \( {Q}^{\prime } \) where \( {Q}^{\prime } \) is the same as \( Q \) except having some columns multiplied by \( \left( {-1}\right) \) . Thus the columns of \( {Q}^{\prime } \) are eigenvectors of \( A \) . The matrices \( {A}_{k} \) converge to \( D \) .
Proof: This follows from Theorem 15.2.4. Here \( S = Q,{S}^{-1} = {Q}^{T} \) . Thus\n\n\[ \nQ = S = {QR} \]\n\nand \( R = I \) . By Theorem 15.2.4 and Lemma 15.2.2,\n\n\[ \n{A}_{k} = {Q}^{\left( k\right) T}A{Q}^{\left( k\right) } \rightarrow {Q}^{\prime T}A{Q}^{\prime } = {Q}^{T}{AQ} = D. \]\n\nbecause formula (15.14) is unaffected by replacing \( Q \) with \( {Q}^{\prime } \) . ∎
Yes
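A minimal sketch of the QR algorithm on a symmetric matrix, for which the corollary predicts convergence to a diagonal matrix of eigenvalues. The matrix \( \left( \begin{matrix} 2 & 1 \\ 1 & 2 \end{matrix}\right) \) (eigenvalues \( 3 \) and \( 1 \)) is an illustrative choice; the \( 2 \times 2 \) QR factorization is done with a single Givens rotation.

```python
import math

def qr_step(A):
    """One step A_k -> A_{k+1} = R_k Q_k of the QR algorithm, 2x2 case.
    The rotation Q with A = QR zeroes the (2,1) entry, then we multiply
    the factors back in the reverse order."""
    (a, b), (c, d) = A
    r = math.hypot(a, c)
    cs, sn = a / r, c / r            # Givens rotation; Q = [[cs,-sn],[sn,cs]]
    r01 = cs * b + sn * d            # R = [[r, r01], [0, r11]]
    r11 = -sn * b + cs * d
    return [[r * cs + r01 * sn, -r * sn + r01 * cs],
            [r11 * sn, r11 * cs]]

A = [[2.0, 1.0], [1.0, 2.0]]
for _ in range(30):
    A = qr_step(A)
print(A)   # close to diag(3, 1): eigenvalues on the diagonal, largest first
```

Since \( \left| {\lambda}_{2}/{\lambda}_{1} \right| = 1/3 \), the off-diagonal entries shrink by roughly that factor each step, so 30 iterations leave them at roundoff level.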
Here is a matrix.\n\n\[ \left( \begin{matrix} 3 & 2 & 1 \\ - 2 & 0 & - 1 \\ - 2 & - 2 & 0 \end{matrix}\right) \]\n\nIt happens that the eigenvalues of this matrix are \( 1,1 + i,1 - i \) . Let's apply the QR algorithm as if the eigenvalues were not known.
Applying the \( {QR} \) algorithm to this matrix yields the following sequence of matrices.\n\n\[ {A}_{1} = \left( \begin{matrix} {1.2353} & {1.9412} & {4.3657} \\ - {.39215} & {1.5425} & {5.3886} \times {10}^{-2} \\ - {.16169} & - {.18864} & {.22222} \end{matrix}\right) \]\n\n\[ \vdots \]\n\n\[ {A}_{12} = \left( \begin{matrix} {9.1772} \times {10}^{-2} & {.63089} & - {2.0398} \\ - {2.8556} & {1.9082} & - {3.1043} \\ {1.0786} \times {10}^{-2} & {3.4614} \times {10}^{-4} & {1.0} \end{matrix}\right) \]\n\nAt this point the bottom two terms on the left part of the bottom row are both very small so it appears the real eigenvalue is near 1.0. The complex eigenvalues are obtained\n\nfrom solving\n\[ \det \left( {\lambda \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) - \left( \begin{matrix} {9.1772} \times {10}^{-2} & {.63089} \\ - {2.8556} & {1.9082} \end{matrix}\right) }\right) = 0 \]\n\nThis yields\n\n\[ \lambda = {1.0} - {.98828i},{1.0} + {.98828i} \]
Yes
The equation \( {x}^{4} + {x}^{3} + 4{x}^{2} + x - 2 = 0 \) has exactly two real solutions.
A matrix whose characteristic polynomial is the given polynomial is\n\n\[ \left( \begin{matrix} - 1 & - 4 & - 1 & 2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{matrix}\right) \]\n\nUsing the \( {QR} \) algorithm yields the following sequence of iterates for \( {A}_{k} \)\n\n\[ {A}_{1} = \left( \begin{matrix} {.99999} & - {2.5927} & - {1.7588} & - {1.2978} \\ {2.1213} & - {1.7778} & - {1.6042} & - {.99415} \\ 0 & {.34246} & - {.32749} & - {.91799} \\ 0 & 0 & - {.44659} & {.10526} \end{matrix}\right) \]\n\n\[ {A}_{9} = \left( \begin{matrix} - {.83412} & - {4.1682} & - {1.939} & - {.7783} \\ {1.05} & {.14514} & {.2171} & {2.5474} \times {10}^{-2} \\ 0 & {4.0264} \times {10}^{-4} & - {.85029} & - {.61608} \\ 0 & 0 & - {1.8263} \times {10}^{-2} & {.53939} \end{matrix}\right) \]\n\nNow this is similar to \( A \) and the eigenvalues are close to the eigenvalues obtained from the two blocks on the diagonal. Of course the lower left corner of the bottom block is vanishing but it is still fairly large so the eigenvalues are approximated by the solution to\n\n\[ \det \left( {\lambda \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) - \left( \begin{matrix} - {.85029} & - {.61608} \\ - {1.8263} \times {10}^{-2} & {.53939} \end{matrix}\right) }\right) = 0 \]\n\nThe solution to this is\n\n\[ \lambda = - {.85834},{.54744} \]\n\nand for the complex eigenvalues,\n\n\[ \det \left( {\lambda \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) - \left( \begin{matrix} - {.83412} & - {4.1682} \\ {1.05} & {.14514} \end{matrix}\right) }\right) = 0 \]\n\nThe solution is\n\n\[ \lambda = - {.34449} - {2.0339i}, - {.34449} + {2.0339i} \]\n\nHow close are the complex eigenvalues just obtained to giving a solution to the original equation? Try \( - {.34449} + {2.0339i} \). When this is plugged in it yields\n\n\[ - {.0012} + {2.0068} \times {10}^{-4}i \]\n\nwhich is pretty close to 0 . 
The real eigenvalues are also very close to the corresponding real solutions to the original equation.
Yes
Example 1.2.7 Venn Diagram Examples. \( A \cap B \) is illustrated in Figure 1.2 .8 by shading the appropriate region.
![6dd34435-8451-4aec-abdd-96bf3c6137fe_20_0.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_20_0.jpg)\n\nFigure 1.2.8 Venn Diagram for the Intersection of Two Sets
Yes
Example 1.3.2 Some Cartesian Products. Notation in mathematics is often developed for good reason. In this case, a few examples will make clear why the symbol \( \times \) is used for Cartesian products.\n\n- Let \( A = \{ 1,2,3\} \) and \( B = \{ 4,5\} \) . Then \( A \times B = \{ \left( {1,4}\right) ,\left( {1,5}\right) ,\left( {2,4}\right) ,\left( {2,5}\right) ,\left( {3,4}\right) ,\left( {3,5}\right) \} \) . Note that \( \left| {A \times B}\right| = 6 = \left| A\right| \times \left| B\right| \) .
- \( A \times A = \{ \left( {1,1}\right) ,\left( {1,2}\right) ,\left( {1,3}\right) ,\left( {2,1}\right) ,\left( {2,2}\right) ,\left( {2,3}\right) ,\left( {3,1}\right) ,\left( {3,2}\right) ,\left( {3,3}\right) \} \) . Note that \( \left| {A \times A}\right| = 9 = {\left| A\right| }^{2} \) .\n\nThese two examples illustrate the general rule that if \( A \) and \( B \) are finite sets, then \( \left| {A \times B}\right| = \left| A\right| \times \left| B\right| \) .
Yes
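The Cartesian products above can be checked directly with `itertools.product` from the standard library:

```python
from itertools import product

A = {1, 2, 3}
B = {4, 5}

AxB = set(product(A, B))   # all ordered pairs (a, b) with a in A, b in B
AxA = set(product(A, A))

print(len(AxB))   # 6 == |A| * |B|
print(len(AxA))   # 9 == |A|^2
```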
Example 1.4.1 An example of conversion to binary. To determine the binary representation of 41 we take the following steps:
- 41 = 2 × 20 + 1; List = 1\n- 20 = 2 × 10 + 0; List = 01\n- 10 = 2 × 5 + 0; List = 001\n- 5 = 2 × 2 + 1; List = 1001\n- 2 = 2 × 1 + 0; List = 01001\n- 1 = 2 × 0 + 1; List = 101001\n\nTherefore, \( {41} = {101001}_{\text{two}} \)
Yes
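The repeated-division procedure above translates directly into a short function. The name `to_binary` is an illustrative choice.

```python
def to_binary(n):
    """Return the binary representation of a positive integer as a string.
    Each division by 2 yields one remainder; the remainders, read in
    reverse order of discovery, are the binary digits."""
    digits = []
    while n > 0:
        n, r = divmod(n, 2)
        digits.append(str(r))
    return "".join(reversed(digits))

print(to_binary(41))   # 101001
```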
If the general terms in a series are more specific, the sum can often be simplified. For example, (a) \( \mathop{\sum }\limits_{{i = 1}}^{4}{i}^{2} = {1}^{2} + {2}^{2} + {3}^{2} + {4}^{2} = {30} \)
(a) \( \mathop{\sum }\limits_{{i = 1}}^{4}{i}^{2} = {1}^{2} + {2}^{2} + {3}^{2} + {4}^{2} = {30} \)
Yes
Example 1.5.4 Some generalized operations. If \( {A}_{1} = \{ 0,2,3\} ,{A}_{2} = \) \( \{ 1,2,3,6\} \), and \( {A}_{3} = \{ - 1,0,3,9\} \), then
\[ \mathop{\bigcap }\limits_{{i = 1}}^{3}{A}_{i} = {A}_{1} \cap {A}_{2} \cap {A}_{3} = \{ 3\} \] and \[ \mathop{\bigcup }\limits_{{i = 1}}^{3}{A}_{i} = {A}_{1} \cup {A}_{2} \cup {A}_{3} = \{ - 1,0,1,2,3,6,9\} . \]
Yes
Example 2.1.1 How many lunches can you have? A snack bar serves five different sandwiches and three different beverages. How many different lunches can a person order?
An alternative method of solution for this example is to make the simple observation that there are five different choices for sandwiches and three different choices for beverages, so there are \( 5 \cdot 3 = {15} \) different lunches that can be ordered.
Yes
Example 2.1.3 Counting elements in a cartesian product. Let \( A = \) \( \{ a, b, c, d, e\} \) and \( B = \{ 1,2,3\} \) . From Chapter 1 we know how to list the elements in \( A \times B = \{ \left( {a,1}\right) ,\left( {a,2}\right) ,\left( {a,3}\right) ,\ldots ,\left( {e,3}\right) \} \) .
Since the first entry of each pair can be any one of the five elements \( a, b, c, d \), and \( e \), and since the second can be any one of the three numbers \( 1,2 \), and 3, it is quite clear there are \( 5 \cdot 3 = {15} \) different elements in \( A \times B \) .
Yes
A person is to complete a true-false questionnaire consisting of ten questions. How many different ways are there to answer the questionnaire?
Since each question can be answered in either of two ways (true or false), and there are ten questions, there are\n\n\[ \n2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 \cdot 2 = {2}^{10} = {1024} \n\]\n\ndifferent ways of answering the questionnaire.
Yes
Theorem 2.1.7 Power Set Cardinality Theorem. If \( A \) is a finite set, then \( \left| {\mathcal{P}\left( A\right) }\right| = {2}^{\left| A\right| } \) .
Proof: Consider how we might determine any \( B \in \mathcal{P}\left( A\right) \), where \( \left| A\right| = n \) . For each element \( x \in A \) there are two choices, either \( x \in B \) or \( x \notin B \) . Since there are \( n \) elements of \( A \) we have, by the rule of products, \[ \underset{n\text{ factors }}{\underbrace{2 \cdot 2 \cdot \cdots \cdot 2}} = {2}^{n} \] different subsets of \( A \) . Therefore, \( \left| {\mathcal{P}\left( A\right) }\right| = {2}^{n} \) .
Yes
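Enumerating the power set confirms the theorem on a small example: collecting the subsets of every possible size yields \( {2}^{\left| A\right| } \) subsets in all.

```python
from itertools import combinations

A = ['a', 'b', 'c', 'd']

# all k-element subsets for k = 0, 1, ..., |A|
power_set = [set(c) for k in range(len(A) + 1) for c in combinations(A, k)]

print(len(power_set))   # 16 == 2**4
```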
How many different ways can we order the three different elements of the set \( A = \{ a, b, c\} \) ?
Since we have three choices for position one, two choices for position two, and one choice for the third position, we have, by the rule of products, \( 3 \cdot 2 \cdot 1 = 6 \) different ways of ordering the three letters. We illustrate through a tree diagram.
Yes
Example 2.2.3 Ordering a schedule. A student is taking five courses in the fall semester. How many different ways can the five courses be listed?
There are \( 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = {120} \) different permutations of the set of courses.
Yes
If there are twenty-five players on the team, there are \( {25} \cdot {24} \cdot {23} \cdot \cdots \cdot 3 \cdot 2 \cdot 1 \) different permutations of the players.
This number of permutations is huge. In fact it is 15511210043330985984000000, but writing it like this isn't all that instructive, while leaving it as a product as we originally had makes it easier to see where the number comes from. We just need to find a more compact way of writing these products.
Yes
Example 2.2.6 Choosing Club Officers. A club of twenty-five members will hold an election for president, secretary, and treasurer in that order. Assume a person can hold only one position. How many ways are there of choosing these three officers?
By the rule of products there are \( {25} \cdot {24} \cdot {23} \) ways of making a selection.
Yes
Theorem 2.2.8 Permutation Counting Formula. The number of possible permutations of \( k \) elements taken from a set of \( n \) elements is\n\n\[ P\left( {n, k}\right) = n \cdot \left( {n - 1}\right) \cdot \left( {n - 2}\right) \cdot \cdots \cdot \left( {n - k + 1}\right) = \mathop{\prod }\limits_{{j = 0}}^{{k - 1}}\left( {n - j}\right) = \frac{n!}{\left( {n - k}\right) !}. \]
Proof. Case I: If \( k = n \) we have \( P\left( {n, n}\right) = n! = \frac{n!}{\left( {n - n}\right) !} \) .\n\nCase II: If \( 0 \leq k < n \), then we have \( k \) positions to fill using \( n \) elements and\n\n(a) Position 1 can be filled by any one of \( n - 0 = n \) elements\n\n(b) Position 2 can be filled by any one of \( n - 1 \) elements\n\n(c) \( \cdots \)\n\n(d) Position \( \mathrm{k} \) can be filled by any one of \( n - \left( {k - 1}\right) = n - k + 1 \) elements\n\nHence, by the rule of products,\n\n\[ P\left( {n, k}\right) = n \cdot \left( {n - 1}\right) \cdot \left( {n - 2}\right) \cdot \cdots \cdot \left( {n - k + 1}\right) = \frac{n!}{\left( {n - k}\right) !}. \]
Yes
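The permutation counting formula is a one-liner in Python (recent versions also provide `math.perm`, which computes the same quantity):

```python
from math import factorial

def P(n, k):
    """Number of permutations of k elements taken from a set of n,
    computed as n! / (n - k)!."""
    return factorial(n) // factorial(n - k)

print(P(25, 3))   # 25 * 24 * 23 = 13800, the club-officers count above
print(P(5, 5))    # 5! = 120
```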
Example 2.2.9 Another example of choosing officers. A club has eight members eligible to serve as president, vice-president, and treasurer. How many ways are there of choosing these officers?
Solution 1: Using the rule of products. There are eight possible choices for the presidency, seven for the vice-presidency, and six for the office of treasurer. By the rule of products there are \( 8 \cdot 7 \cdot 6 = {336} \) ways of choosing these officers.
Yes
To count the number of ways to order five courses, we can use the permutation formula. We want the number of permutations of five courses taken five at a time:
\[ P\left( {5,5}\right) = \frac{5!}{\left( {5 - 5}\right) !} = 5! = {120}. \]
Yes
a How many three-digit numbers can be formed if no repetition of digits can occur?
Solution 1: Using the rule of products. We have any one of five choices for digit one, any one of four choices for digit two, and three choices for digit three. Hence, \( 5 \cdot 4 \cdot 3 = {60} \) different three-digit numbers can be formed.\n\nSolution 2: Using the permutation formula. We want the total number of permutations of five digits taken three at a time:\n\n\[ P\left( {5,3}\right) = \frac{5!}{\left( {5 - 3}\right) !} = 5 \cdot 4 \cdot 3 = {60}. \]
Yes
Example 2.3.2 Some partitions of a four element set. Let \( A = \) \( \{ a, b, c, d\} \) . Examples of partitions of \( A \) are:\n\n\[ \n\\text{-}\{ \{ a\} ,\{ b\} ,\{ c, d\} \}\n\]\n\n\[ \n\\text{-}\{ \{ a, b\} ,\{ c, d\} \}\n\]\n\n\[ \n\\text{-}\{ \{ a\} ,\{ b\} ,\{ c\} ,\{ d\} \}\n\]\n\nHow many others are there, do you suppose?
There are 15 different partitions. The most efficient way to count them all is to classify them by the size of blocks. For example, the partition \( \{ \{ a\} ,\{ b\} ,\{ c, d\} \} \) has block sizes \( 1,1 \), and 2 .
Yes
Example 2.3.3 Some Integer Partitions. Two examples of partitions of set of integers \( \mathbb{Z} \) are\n\n\[ \text{-}\{ \{ n\} \mid n \in \mathbb{Z}\} \text{and} \]\n\n\[ \text{-}\{ \{ n \in \mathbb{Z} \mid n < 0\} ,\{ 0\} ,\{ n \in \mathbb{Z} \mid 0 < n\} \} \text{.} \]
The set of subsets \( \{ \{ n \in \mathbb{Z} \mid n \geq 0\} ,\{ n \in \mathbb{Z} \mid n \leq 0\} \} \) is not a partition because the two subsets have a nonempty intersection. A second example of a non-partition is \( \{ \{ n \in \mathbb{Z} \mid \left| n\right| = k\} \mid k = - 1,0,1,2,\cdots \} \) because one of the blocks, the one for \( k = - 1 \), is empty.
No
Theorem 2.3.4 The Basic Law Of Addition:. If \( A \) is a finite set, and if \( \left\{ {{A}_{1},{A}_{2},\ldots ,{A}_{n}}\right\} \) is a partition of \( A \), then\n\n\[ \left| A\right| = \left| {A}_{1}\right| + \left| {A}_{2}\right| + \cdots + \left| {A}_{n}\right| = \mathop{\sum }\limits_{{k = 1}}^{n}\left| {A}_{k}\right| \]
The basic law of addition can be rephrased as follows: If \( A \) is a finite set where \( {A}_{1} \cup {A}_{2} \cup \cdots \cup {A}_{n} = A \) and where \( {A}_{i} \cap {A}_{j} = \varnothing \) whenever \( i \neq j \), then\n\n\[ \left| A\right| = \left| {{A}_{1} \cup {A}_{2} \cup \cdots \cup {A}_{n}}\right| = \left| {A}_{1}\right| + \left| {A}_{2}\right| + \cdots + \left| {A}_{n}\right| \]
Yes
Example 2.3.7 Counting Students in Non-disjoint Classes. It was determined that all junior computer science majors take at least one of the following courses: Algorithms, Logic Design, and Compiler Construction. Assume the number in each course was 75,60 and 55 , respectively for the three courses listed. Further investigation indicated ten juniors took all three courses, twenty-five took Algorithms and Logic Design, twelve took Algorithms and Compiler Construction, and fifteen took Logic Design and Compiler Construction. How many junior C.S. majors are there?
Since all junior CS majors must take at least one of the courses, the number we want is: \n\n\[ \n\left| A\right| = \left| {{A}_{1} \cup {A}_{2} \cup {A}_{3}}\right| = \left| {A}_{1}\right| + \left| {A}_{2}\right| + \left| {A}_{3}\right| - \text{ repeats. } \n\] \n\nWe see that the whole universal set is naturally partitioned into subsets that are labeled by the numbers 1 through 8, and the set \( A \) is partitioned into subsets labeled 1 through 7. The region labeled 8 represents all students who are not junior CS majors. Note also that students in the subsets labeled 2, 3 , and 4 are double counted, and those in the subset labeled 1 are triple counted. To adjust, we must subtract the numbers in regions 2, 3 and 4. This can be done by subtracting the numbers in the intersections of each pair of sets. However, the individuals in region 1 will have been removed three times, just as they had been originally added three times. Therefore, we must finally add their number back in. \n\n\[ \n\left| A\right| = \left| {{A}_{1} \cup {A}_{2} \cup {A}_{3}}\right| \n\] \n\n\[ \n= \left| {A}_{1}\right| + \left| {A}_{2}\right| + \left| {A}_{3}\right| - \text{repeats} \n\] \n\n\[ \n= \left| {A}_{1}\right| + \left| {A}_{2}\right| + \left| {A}_{3}\right| - \text{duplicates + triplicates} \n\] \n\n\[ \n= \left| {A}_{1}\right| + \left| {A}_{2}\right| + \left| {A}_{3}\right| - \left( {\left| {{A}_{1} \cap {A}_{2}}\right| + \left| {{A}_{1} \cap {A}_{3}}\right| + \left| {{A}_{2} \cap {A}_{3}}\right| }\right) + \left| {{A}_{1} \cap {A}_{2} \cap {A}_{3}}\right| \n\] \n\n\[ \n= {75} + {60} + {55} - {25} - {12} - {15} + {10} = {148} \n\]
Yes
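The inclusion-exclusion computation above can be written out as a small function of the seven given counts. The function name is an illustrative choice.

```python
def union_of_three(a, b, c, ab, ac, bc, abc):
    """|A1 U A2 U A3| by inclusion-exclusion, given the sizes of the three
    sets, the three pairwise intersections, and the triple intersection."""
    return a + b + c - ab - ac - bc + abc

# Algorithms, Logic Design, Compiler Construction counts from the example
print(union_of_three(75, 60, 55, 25, 12, 15, 10))   # 148
```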
How many different ways are there to permute three letters from the set \( A = \{ a, b, c, d\} \) ?
From the Permutation Counting Formula there are \( P\left( {4,3}\right) = \frac{4!}{\left( {4 - 3}\right) !} = {24} \) different orderings of three letters from \( A \)
Yes
How many ways can we select a set of three letters from \( A = \{ a, b, c, d\} \) ? Note here that we are not concerned with the order of the three letters.
By trial and error, abc, abd, acd, and bcd are the only listings possible. To repeat, we were looking for all three-element subsets of the set \( A \) . Order is not important in sets. The notation for choosing 3 elements from 4 is most commonly \( \left( \begin{array}{l} 4 \\ 3 \end{array}\right) \) or occasionally \( C\left( {4,3}\right) \), either of which is read "4 choose 3."
Yes
Theorem 2.4.4 Binomial Coefficient Formula. If \( n \) and \( k \) are nonnegative integers with \( 0 \leq k \leq n \), then the number \( k \) -element subsets of an \( n \) element set is equal to\n\n\[ \left( \begin{array}{l} n \\ k \end{array}\right) = \frac{n!}{\left( {n - k}\right) ! \cdot k!}. \]
Proof 1: There are \( k \) ! ways of ordering the elements of any \( k \) element set. Therefore,\n\n\[ \left( \begin{array}{l} n \\ k \end{array}\right) = \frac{P\left( {n, k}\right) }{k!} = \frac{n!}{\left( {n - k}\right) !k!}. \]
Yes
Assume an evenly balanced coin is tossed five times. In how many ways can three heads be obtained?
This is a combination problem, because the order in which the heads appear does not matter. We can think of this as a situation involving sets by considering the set of flips of the coin, 1 through 5 , in which heads comes up. The number of ways to get three heads is \( \left( \begin{array}{l} 5 \\ 3 \end{array}\right) = \frac{5 \cdot 4}{2 \cdot 1} = {10} \) .
Yes
We determine the total number of ordered ways a fair coin can land if tossed five consecutive times.
The five tosses can produce any one of the following mutually exclusive, disjoint events: 5 heads, 4 heads, 3 heads, 2 heads, 1 head, or 0 heads. For example, by the previous example, there are \( \left( \begin{array}{l} 5 \\ 3 \end{array}\right) = {10} \) sequences in which three heads appear. Counting the other possibilities in the same way, by the law of addition we have:\n\n\[ \left( \begin{array}{l} 5 \\ 5 \end{array}\right) + \left( \begin{array}{l} 5 \\ 4 \end{array}\right) + \left( \begin{array}{l} 5 \\ 3 \end{array}\right) + \left( \begin{array}{l} 5 \\ 2 \end{array}\right) + \left( \begin{array}{l} 5 \\ 1 \end{array}\right) + \left( \begin{array}{l} 5 \\ 0 \end{array}\right) = 1 + 5 + {10} + {10} + 5 + 1 = {32} \]\n\nways to observe the five flips.\n\nOf course, we could also have applied the extended rule of products, and since there are two possible outcomes for each of the five tosses, we have \( {2}^{5} = {32} \) ways.
Yes
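The two counts above agree, and `math.comb` lets us check the sum of binomial coefficients directly:

```python
from math import comb

# number of 5-toss sequences with exactly k heads, for k = 0, ..., 5
counts = [comb(5, k) for k in range(6)]

print(counts)        # [1, 5, 10, 10, 5, 1]
print(sum(counts))   # 32 == 2**5, matching the rule of products
```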
A Committee of Five. A committee usually starts as an unstructured set of people selected from a larger membership. Therefore, a committee can be thought of as a combination. If a club of 25 members has a five-member social committee, there are \( \left( \begin{matrix} {25} \\ 5 \end{matrix}\right) = \frac{{25} \cdot {24} \cdot {23} \cdot {22} \cdot {21}}{5!} = {53130} \) different possible social committees. If any structure or restriction is placed on the way the social committee is to be selected, the number of possible committees will probably change. For example, if the club has a rule that the treasurer must be on the social committee, then the number of possibilities is reduced to \( \left( \begin{matrix} {24} \\ 4 \end{matrix}\right) = \frac{{24} \cdot {23} \cdot {22} \cdot {21}}{4!} = {10626} \) .
If we further require that a chairperson other than the treasurer be selected for the social committee, we have \( \left( \begin{matrix} {24} \\ 4 \end{matrix}\right) \cdot 4 = {42504} \) different possible social committees. The choice of the four non-treasurers accounts for the factor \( \left( \begin{matrix} {24} \\ 4 \end{matrix}\right) \) while the need to choose a chairperson accounts for the 4 .
Yes
Example 2.4.8 Binomial Coefficients - Extreme Cases. By simply applying the definition of a Binomial Coefficient as a number of subsets we see that there is \( \left( \begin{array}{l} n \\ 0 \end{array}\right) = 1 \) way of choosing a combination of zero elements from a set of \( n \) . In addition, we see that there is \( \left( \begin{array}{l} n \\ n \end{array}\right) = 1 \) way of choosing a combination of \( n \) elements from a set of \( n \) .
We could compute these values using the formula we have developed, but no arithmetic is really needed here.
Yes
Theorem 2.4.9 The Binomial Theorem. If \( n \geq 0 \), and \( x \) and \( y \) are numbers, then\n\n\[{\left( x + y\right) }^{n} = \mathop{\sum }\limits_{{k = 0}}^{n}\left( \begin{array}{l} n \\ k \end{array}\right) {x}^{n - k}{y}^{k}.\]
Proof. This theorem will be proven using a logical procedure called mathematical induction, which will be introduced in Chapter 3.
No
Find the third term in the expansion of \( {\left( x - y\right) }^{4} = {\left( x + \left( -y\right) \right) }^{4} \)
The third term, when \( k = 2 \) , is \( \left( \begin{array}{l} 4 \\ 2 \end{array}\right) {x}^{4 - 2}{\left( -y\right) }^{2} = 6{x}^{2}{y}^{2} \)
Yes
Expand \( {\left( 3x - 2\right) }^{3} \).
If we replace \( x \) and \( y \) in the Binomial Theorem with \( {3x} \) and -2, respectively, we get\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{3}\left( \begin{array}{l} 3 \\ k \end{array}\right) {\left( 3x\right) }^{3 - k}{\left( -2\right) }^{k} = \left( \begin{array}{l} 3 \\ 0 \end{array}\right) {\left( 3x\right) }^{3}{\left( -2\right) }^{0} + \left( \begin{array}{l} 3 \\ 1 \end{array}\right) {\left( 3x\right) }^{2}{\left( -2\right) }^{1} + \left( \begin{array}{l} 3 \\ 2 \end{array}\right) {\left( 3x\right) }^{1}{\left( -2\right) }^{2} + \left( \begin{array}{l} 3 \\ 3 \end{array}\right) {\left( 3x\right) }^{0}{\left( -2\right) }^{3}. \]\n\n\[ = {27}{x}^{3} - {54}{x}^{2} + {36x} - 8 \]
Yes
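The coefficients of the expansion can be generated term by term from the Binomial Theorem with \( x \rightarrow 3x \) and \( y \rightarrow -2 \):

```python
from math import comb

# coefficient of x^(3-k) in (3x - 2)^3, from the cubic term down to the constant
coeffs = [comb(3, k) * 3**(3 - k) * (-2)**k for k in range(4)]

print(coeffs)   # [27, -54, 36, -8]
```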
Definition 3.1.3 Logical Conjunction. If \( p \) and \( q \) are propositions, their conjunction, \( p \) and \( q \) (denoted \( p \land q \) ), is defined by the truth table
<table><thead><tr><th>\( p \)</th><th>q</th><th>\( p \land q \)</th></tr></thead><tr><td>0</td><td>0</td><td>0</td></tr><tr><td>0</td><td>1</td><td>0</td></tr><tr><td>1</td><td>0</td><td>0</td></tr><tr><td>1</td><td>1</td><td>1</td></tr></table>
Yes
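The truth table in Definition 3.1.3 can be generated mechanically. Here is a short Python sketch (not from the text) that enumerates all assignments with `itertools.product`, encoding truth values as 0 and 1 as in the table:

```python
from itertools import product

# Generate the truth table for p AND q over all four assignments.
# Python's `and` on 0/1 integers returns 0 or 1, matching the table.
table = [(p, q, p and q) for p, q in product([0, 1], repeat=2)]
for row in table:
    print(row)
```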
The Identity Law can be verified with this truth table. The fact that \( \left( {p \land 1}\right) \leftrightarrow p \) is a tautology serves as a valid proof.
Table 3.4.2 Truth table to demonstrate the identity law for conjunction.\n\n\[ \begin{matrix} p & 1 & p \land 1 & \left( {p \land 1}\right) \leftrightarrow p \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{matrix} \]
Yes
A typical direct proof. This is a theorem: \( p \rightarrow r, q \rightarrow s, p \vee q \Rightarrow s \vee r \) .
Table 3.5.8 Direct proof of \( p \rightarrow r, q \rightarrow s, p \vee q \Rightarrow s \vee r \)
No
Two proofs of the same theorem. Here are two direct proofs of \( \neg p \vee q, s \vee p,\neg q \Rightarrow s \) :
Table 3.5.10 Direct proof of \( \neg p \vee q, s \vee p,\neg q \Rightarrow s \)
No
The following proof of \( p \rightarrow \left( {q \rightarrow s}\right) ,\neg r \vee p, q \Rightarrow r \rightarrow s \) includes \( r \) as a fourth premise. Inference of truth of \( s \) completes the proof.
1. \( \;\neg r \vee p\; \) Premise\n\n2. \( \;r\; \) Added premise\n\n3. \( \;p\;\left( 1\right) ,\left( 2\right) \), disjunction simplification\n\n4. \( p \rightarrow \left( {q \rightarrow s}\right) \; \) Premise\n\n5. \( q \rightarrow s\;\left( 3\right) ,\left( 4\right) \), detachment\n\n6. \( \;q \) Premise\n\n7. \( s \) (5), (6), detachment.
Yes
An Indirect proof of \( p \rightarrow r, q \rightarrow s, p \vee q \Rightarrow s \vee r \)
1. \( \;\neg \left( {s \vee r}\right) \; \) Negated conclusion\n\n2. \( \neg s \land \neg r\; \) DeMorgan’s Law,(1)\n\n3. \( \neg s\; \) Conjunctive simplification,(2)\n\n4. \( \;q \rightarrow s\; \) Premise\n\n5. \( \neg q\; \) Indirect reasoning,(3),(4)\n\n6. \( \;\neg r\; \) Conjunctive simplification,(2)\n\n7. \( \;p \rightarrow r\; \) Premise\n\n8. \( \neg p\; \) Indirect reasoning,(6),(7)\n\n9. \( \left( {\neg p}\right) \land \left( {\neg q}\right) \; \) Conjunctive, \( \left( 5\right) ,\left( 8\right) \)\n\n10. \( \neg \left( {p \vee q}\right) \; \) DeMorgan’s Law,(9)\n\n11. \( \;p \vee q\; \) Premise\n\n12. 0 (10), (11) \( ▱ \)
Yes
Here is an indirect proof of \( a \rightarrow b,\neg \left( {b \vee c}\right) \Rightarrow \neg a \) .
Table 3.5.18 Indirect proof of \( a \rightarrow b,\neg \left( {b \vee c}\right) \Rightarrow \neg a \)\n\n1. \( \;a\; \) Negation of the conclusion\n\n2. \( \;a \rightarrow b\; \) Premise\n\n3. \( \;b\; \) (1), (2), detachment\n\n4. \( \;b \vee c\; \) (3), disjunctive addition\n\n5. \( \;\neg \left( {b \vee c}\right) \; \) Premise\n\n6. 0 (4), (5) \( ▱ \)
Yes
Consider the following proposition over the positive integers, which we will label \( p\left( n\right) \) : The sum of the positive integers from 1 to \( n \) is \( \frac{n\left( {n + 1}\right) }{2} \) .
In proving \( p\left( {99}\right) \Rightarrow p\left( {100}\right) \), we will use \( p\left( {99}\right) \) as our premise. We must prove: The sum of the positive integers from 1 to 100 is \( \frac{{100}\left( {{100} + 1}\right) }{2} \). We start by observing that the sum of the positive integers from 1 to 100 is \( (1 + 2 + \cdots + {99}) + {100} \). That is, the sum of the positive integers from 1 to 100 equals the sum of the first ninety-nine plus the final number, 100. We can now apply our premise, \( p\left( {99}\right) \), to the sum \( 1 + 2 + \cdots + {99} \). After rearranging our numbers, we obtain the desired expression for \( 1 + 2 + \cdots + {100} \) :\n\n\[ 1 + 2 + \cdots + {99} + {100} = \left( {1 + 2 + \cdots + {99}}\right) + {100} \]\n\n\[ = \frac{{99} \cdot \left( {{99} + 1}\right) }{2} + {100}\text{ by our assumption of }p\left( {99}\right) \]\n\n\[ = \frac{{99} \cdot {100}}{2} + \frac{2 \cdot {100}}{2} \]\n\n\[ = \frac{{100} \cdot {101}}{2} \]\n\n\[ = \frac{{100} \cdot \left( {{100} + 1}\right) }{2} \]\n\nWhat we've just done is analogous to checking two dominos in a line and finding that they are properly positioned. Since we are dealing with an infinite line, we must check all pairs at once. This is accomplished by proving that \( p\left( n\right) \Rightarrow p\left( {n + 1}\right) \) for all \( n \geq 1 \) :\n\n\[ 1 + 2 + \cdots + n + \left( {n + 1}\right) = \left( {1 + 2 + \cdots + n}\right) + \left( {n + 1}\right) \]\n\n\[ = \frac{n\left( {n + 1}\right) }{2} + \left( {n + 1}\right) \text{ by }p\left( n\right) \]\n\n\[ = \frac{n\left( {n + 1}\right) }{2} + \frac{2\left( {n + 1}\right) }{2} \]\n\n\[ = \frac{\left( {n + 1}\right) \left( {n + 2}\right) }{2} \]\n\n\[ = \frac{\left( {n + 1}\right) \left( {\left( {n + 1}\right) + 1}\right) }{2} \]\n\nThey are all lined up! Now look at \( p\left( 1\right) \) : The sum of the positive integers from 1 to 1 is \( \frac{1 \cdot \left( {1 + 1}\right) }{2} = 1 \), which is clearly true.
Yes
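The closed form \( \frac{n(n+1)}{2} \) can also be spot-checked numerically. This does not replace the induction proof, which covers all positive integers at once, but it is a useful sanity check; the helper name `p` below is our own:

```python
# Check p(n): 1 + 2 + ... + n == n(n+1)/2 for a range of n.
def p(n):
    return sum(range(1, n + 1)) == n * (n + 1) // 2

# Verify p(n) for n = 1 through 100 (a finite spot-check only).
results = all(p(n) for n in range(1, 101))
print(results)  # True
```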
Theorem 3.7.3 The Principle of Mathematical Induction. Let \( p\left( n\right) \) be a proposition over the positive integers. If\n\n(1) \( p\left( 1\right) \) is true, and\n\n(2) for all \( n \geq 1, p\left( n\right) \Rightarrow p\left( {n + 1}\right) \), \n\nthen \( p\left( n\right) \) is a tautology.
Note: The truth of \( p\left( 1\right) \) is called the basis for the induction proof. The premise that \( p\left( n\right) \) is true in the second part is called the induction hypothesis. The proof that \( p\left( n\right) \) implies \( p\left( {n + 1}\right) \) is called the induction step of the proof. Despite our analogy, the basis is usually done first in an induction proof. However, order doesn't really matter.
No
Consider the implication over the positive integers.\n\n\[ p\left( n\right) : {q}_{0} \rightarrow {q}_{1},{q}_{1} \rightarrow {q}_{2},\ldots ,{q}_{n - 1} \rightarrow {q}_{n},{q}_{0} \Rightarrow {q}_{n} \]
A proof that \( p\left( n\right) \) is a tautology follows. Basis: \( p\left( 1\right) \) is \( {q}_{0} \rightarrow {q}_{1},{q}_{0} \Rightarrow {q}_{1} \). This is the logical law of detachment, which we know is true. If you haven't done so yet, write out the truth table of \( \left( {\left( {{q}_{0} \rightarrow {q}_{1}}\right) \land {q}_{0}}\right) \rightarrow {q}_{1} \) to verify this step.\n\nInduction: Assume that \( p\left( n\right) \) is true for some \( n \geq 1 \). We want to prove that \( p\left( {n + 1}\right) \) must be true. That is:\n\n\[ {q}_{0} \rightarrow {q}_{1},{q}_{1} \rightarrow {q}_{2},\ldots ,{q}_{n - 1} \rightarrow {q}_{n},{q}_{n} \rightarrow {q}_{n + 1},{q}_{0} \Rightarrow {q}_{n + 1} \]\n\nHere is a direct proof of \( p\left( {n + 1}\right) \) :\n\n## Table 3.7.5\n\n<table><tr><td>Step</td><td>Proposition</td><td>Justification</td></tr><tr><td>\( 1 - \left( {n + 1}\right) \)</td><td>\( {q}_{0} \rightarrow {q}_{1},{q}_{1} \rightarrow {q}_{2},\ldots ,{q}_{n - 1} \rightarrow {q}_{n},{q}_{0} \)</td><td>Premises</td></tr><tr><td>\( n + 2 \)</td><td>\( {q}_{n} \)</td><td>\( \left( 1\right) - \left( {n + 1}\right), p\left( n\right) \)</td></tr><tr><td>\( n + 3 \)</td><td>\( {q}_{n} \rightarrow {q}_{n + 1} \)</td><td>Premise</td></tr><tr><td>\( n + 4 \)</td><td>\( {q}_{n + 1} \)</td><td>\( \left( {n + 2}\right) ,\left( {n + 3}\right) \), detachment</td></tr></table>
Yes
For all \( n \geq 1 \) , \( {n}^{3} + {2n} \) is a multiple of 3.
Basis: \( {1}^{3} + 2\left( 1\right) = 3 \) is a multiple of 3 . The basis is almost always this easy!\n\nInduction: Assume that \( n \geq 1 \) and \( {n}^{3} + {2n} \) is a multiple of 3 . Consider \( {\left( n + 1\right) }^{3} + 2\left( {n + 1}\right) \) . Is it a multiple of 3 ?\n\n\[{\left( n + 1\right) }^{3} + 2\left( {n + 1}\right) = {n}^{3} + 3{n}^{2} + {3n} + 1 + \left( {{2n} + 2}\right)\]\n\n\[= {n}^{3} + {2n} + 3{n}^{2} + {3n} + 3\]\n\n\[= \left( {{n}^{3} + {2n}}\right) + 3\left( {{n}^{2} + n + 1}\right)\]\n\nYes, \( {\left( n + 1\right) }^{3} + 2\left( {n + 1}\right) \) is the sum of two multiples of 3 ; therefore, it is also a multiple of 3 . \( ▱ \)
Yes
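Both the divisibility claim and the algebraic identity used in the induction step can be spot-checked by machine; a small Python sketch (our own, a finite check only):

```python
# Spot-check that n^3 + 2n is a multiple of 3 for n = 1..1000,
# and that the identity used in the induction step holds:
# (n+1)^3 + 2(n+1) == (n^3 + 2n) + 3(n^2 + n + 1)
divisible = all((n**3 + 2 * n) % 3 == 0 for n in range(1, 1001))
identity = all(
    (n + 1) ** 3 + 2 * (n + 1) == (n**3 + 2 * n) + 3 * (n**2 + n + 1)
    for n in range(1, 1001)
)
print(divisible, identity)  # True True
```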
A proof of the permutations formula. In Chapter 2, we stated that the number of different permutations of \( k \) elements taken from an \( n \) element set, \( P\left( {n;k}\right) \), can be computed with the formula \( \frac{n!}{\left( {n - k}\right) !} \) . We can prove this statement by induction on \( n \) . For \( n \geq 0 \), let \( q\left( n\right) \) be the proposition \[ P\left( {n;k}\right) = \frac{n!}{\left( {n - k}\right) !}\text{ for all }k,0 \leq k \leq n. \]
Basis: \( q\left( 0\right) \) states that if \( P\left( {0;0}\right) \) is the number of ways that 0 elements can be selected from the empty set and arranged in order, then \( P\left( {0;0}\right) = \frac{0!}{0!} = 1 \). This is true. A general law in combinatorics is that there is exactly one way of doing nothing.\n\nInduction: Assume that \( q\left( n\right) \) is true for some natural number \( n \). It is left for us to prove that this assumption implies that \( q\left( {n + 1}\right) \) is true. Suppose that we have a set of cardinality \( n + 1 \) and want to select and arrange \( k \) of its elements. There are two cases to consider, the first of which is easy. If \( k = 0 \), then there is one way of selecting zero elements from the set; hence \[ P\left( {n + 1;0}\right) = 1 = \frac{\left( {n + 1}\right) !}{\left( {n + 1 - 0}\right) !} \] and the formula works in this case.\n\nThe more challenging case is to verify the formula when \( k \) is positive and less than or equal to \( n + 1 \). Here we count the value of \( P\left( {n + 1;k}\right) \) by counting the number of ways that the first element in the arrangement can be filled and then counting the number of ways that the remaining \( k - 1 \) elements can be filled in using the induction hypothesis.\n\nThere are \( n + 1 \) possible choices for the first element. Since that leaves \( n \) elements to fill in the remaining \( k - 1 \) positions, there are \( P\left( {n;k - 1}\right) \) ways of completing the arrangement. By the rule of products, \[ P\left( {n + 1;k}\right) = \left( {n + 1}\right) P\left( {n;k - 1}\right) \] \[ = \left( {n + 1}\right) \frac{n!}{\left( {n - \left( {k - 1}\right) }\right) !} \] \[ = \frac{\left( {n + 1}\right) n!}{\left( {n - k + 1}\right) !} \] \[ = \frac{\left( {n + 1}\right) !}{\left( {\left( {n + 1}\right) - k}\right) !} \] \( ▱ \)
No
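The recurrence \( P(n+1;k) = (n+1)P(n;k-1) \) at the heart of the induction step can be checked directly for small cases; the function name `P` in this Python sketch is our own:

```python
from math import factorial

# P(n; k) = n! / (n - k)!  Verify the recurrence used in the
# induction step, P(n+1; k) = (n+1) * P(n; k-1), for small n
# and all valid k with 1 <= k <= n + 1.
def P(n, k):
    return factorial(n) // factorial(n - k)

ok = all(
    P(n + 1, k) == (n + 1) * P(n, k - 1)
    for n in range(0, 10)
    for k in range(1, n + 2)
)
print(ok)  # True
```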
Theorem 3.7.10 Existence of Prime Factorizations. Every positive integer greater than or equal to 2 has a prime decomposition.
Proof. If you were to encounter this theorem outside the context of a discussion of mathematical induction, it might not be obvious that the proof can be done by induction. Recognizing when an induction proof is appropriate is mostly a matter of experience. Now on to the proof!\n\nBasis: Since 2 is a prime, it is already decomposed into primes (one of them).\n\nInduction: Suppose that for some \( n \geq 2 \) all of the integers \( 2,3,\ldots, n \) have a prime decomposition. Notice the course-of-value hypothesis. Consider \( n + 1 \) . Either \( n + 1 \) is prime or it isn’t. If \( n + 1 \) is prime, it is already decomposed into primes. If not, then \( n + 1 \) has a divisor, \( d \), other than 1 and \( n + 1 \) .\n\nHence, \( n + 1 = {cd} \) where both \( c \) and \( d \) are between 2 and \( n \) . By the induction hypothesis, \( c \) and \( d \) have prime decompositions, \( {c}_{1}{c}_{2}\cdots {c}_{s} \) and \( {d}_{1}{d}_{2}\cdots {d}_{t} \) , respectively. Therefore, \( n + 1 \) has the prime decomposition \( {c}_{1}{c}_{2}\cdots {c}_{s}{d}_{1}{d}_{2}\cdots {d}_{t} \) .
Yes
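The induction argument is effectively an algorithm: if \( m \) is prime, keep it; otherwise split \( m = cd \) and decompose each factor. A minimal Python sketch (our own, not from the text; trial division is used only for illustration):

```python
# Recursive prime decomposition mirroring the induction argument:
# if m has no proper divisor it is prime; otherwise split m = d * (m // d)
# with 2 <= d < m and decompose each factor.
def prime_decomposition(m):
    for d in range(2, m):
        if m % d == 0:
            return prime_decomposition(d) + prime_decomposition(m // d)
    return [m]  # no proper divisor: m is prime

print(prime_decomposition(360))  # [2, 2, 2, 3, 3, 5]
```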
(a) \( {\left( \exists k\right) }_{\mathbb{Z}}\left( {{k}^{2} - k - {12} = 0}\right) \) is another way of saying that there is an integer that solves the equation \( {k}^{2} - k - {12} = 0 \) .
The fact that two such integers exist doesn't affect the truth of this proposition in any way.
No
Over the universe of animals, define \( F\left( x\right) : x \) is a fish and \( W\left( x\right) : x \) lives in the water. We know that the proposition \( W\left( x\right) \rightarrow F\left( x\right) \) is not always true. In other words, \( \left( {\forall x}\right) \left( {W\left( x\right) \rightarrow F\left( x\right) }\right) \) is false. Another way of stating this fact is that there exists an animal that lives in the water and is not a fish; that is,
\[ \neg \left( {\forall x}\right) \left( {W\left( x\right) \rightarrow F\left( x\right) }\right) \Leftrightarrow \left( {\exists x}\right) \left( {\neg \left( {W\left( x\right) \rightarrow F\left( x\right) }\right) }\right) \] \[ \Leftrightarrow \left( {\exists x}\right) \left( {W\left( x\right) \land \neg F\left( x\right) }\right) \]
Yes
The Sum of Odd Integers. We will outline a proof that the sum of any two odd integers is even. Our first step will be to write the theorem in the familiar conditional form: If \( j \) and \( k \) are odd integers, then \( j + k \) is even.
The premise and conclusion of this theorem should be clear now. Notice that if \( j \) and \( k \) are not both odd, then the conclusion may or may not be true. Our only objective is to show that the truth of the premise forces the conclusion to be true. Therefore, we can express the integers \( j \) and \( k \) in the form that all odd integers take; that is:\n\n\[ n \in \mathbb{Z}\text{is odd implies that}\left( {\exists m \in \mathbb{Z}}\right) \left( {n = {2m} + 1}\right) \]\n\nThis observation allows us to examine the sum \( j + k \) and to verify that it must be even.
No
The Square of an Even Integer. Let \( n \in \mathbb{Z} \). We will outline a proof that \( {n}^{2} \) is even if and only if \( n \) is even.
Outline of a proof: Since this is an "if and only if" theorem, two implications must be proven: that if \( n \) is even, then \( {n}^{2} \) is even, and conversely, that if \( {n}^{2} \) is even, then \( n \) is even.
No
Example 3.9.3 \( \sqrt{2} \) is irrational.
Our final example will be an outline of the proof that the square root of 2 is irrational (not an element of \( \mathbb{Q} \) ). This is an example of a theorem that does not appear to be in the standard \( P \Rightarrow C \) form. One way to rephrase the theorem is: If \( x \) is a rational number, then \( {x}^{2} \neq 2 \). A direct proof of this theorem would require that we verify that the square of every rational number is not equal to 2. There is no convenient way of doing this, so we must turn to the indirect method of proof. In such a proof, we assume that \( x \) is a rational number and that \( {x}^{2} = 2 \). This will lead to a contradiction. In order to reach this contradiction, we need to use the following facts:\n\n- A rational number is a quotient of two integers.\n\n- Every fraction can be reduced to lowest terms, so that the numerator and denominator have no common factor greater than 1.\n\n- If \( n \) is an integer, \( {n}^{2} \) is even if and only if \( n \) is even.
No
Example 4.1.2 Disproving distributivity of addition over multiplication. From basic algebra we learned that multiplication is distributive over addition. Is addition distributive over multiplication? That is, is \( a + \left( {b \cdot c}\right) = \) \( \left( {a + b}\right) \cdot \left( {a + c}\right) \) always true?
If we choose the values \( a = 3, b = 4 \), and \( c = 1 \) , we find that \( 3 + \left( {4 \cdot 1}\right) \neq \left( {3 + 4}\right) \cdot \left( {3 + 1}\right) \) . Therefore, this set of values serves as a counterexample to a distributive law of addition over multiplication.
Yes
Theorem 4.1.7 The Distributive Law of Intersection over Union. If \( A, B \), and \( C \) are sets, then \( A \cap \left( {B \cup C}\right) = \left( {A \cap B}\right) \cup \left( {A \cap C}\right) \) .
Proof. What we can assume: \( A, B \), and \( C \) are sets.\n\nWhat we are to prove: \( A \cap \left( {B \cup C}\right) = \left( {A \cap B}\right) \cup \left( {A \cap C}\right) \).\n\nCommentary: What types of objects am I working with: sets? real numbers? propositions? The answer is sets: sets of elements that can be anything you care to imagine. The universe from which we draw our elements plays no part in the proof of this theorem.\n\nWe need to show that the two sets are equal. Let's call them the left-hand set \( \left( {LHS}\right) \) and the right-hand set \( \left( {RHS}\right) \). To prove that \( {LHS} = {RHS} \), we must prove two things: (a) \( {LHS} \subseteq {RHS} \), and (b) \( {RHS} \subseteq {LHS} \).\n\nTo prove part a and, similarly, part b, we must show that each element of \( {LHS} \) is an element of \( {RHS} \). Once we have diagnosed the problem we are ready to begin.\n\nWe must prove: (a) \( A \cap \left( {B \cup C}\right) \subseteq \left( {A \cap B}\right) \cup \left( {A \cap C}\right) \).\n\nLet \( x \in A \cap \left( {B \cup C}\right) \):\n\n\[ x \in A \cap \left( {B \cup C}\right) \Rightarrow x \in A\text{ and }\left( {x \in B\text{ or }x \in C}\right) \]\n\ndef. of union and intersection\n\n\[ \Rightarrow \left( {x \in A\text{ and }x \in B}\right) \text{ or }\left( {x \in A\text{ and }x \in C}\right) \]\n\ndistributive law of logic\n\n\[ \Rightarrow \left( {x \in A \cap B}\right) \text{ or }\left( {x \in A \cap C}\right) \]\n\ndef. of intersection\n\n\[ \Rightarrow x \in \left( {A \cap B}\right) \cup \left( {A \cap C}\right) \]\n\ndef. of union\n\nWe must also prove (b) \( \left( {A \cap B}\right) \cup \left( {A \cap C}\right) \subseteq A \cap \left( {B \cup C}\right) \).\n\n\[ x \in \left( {A \cap B}\right) \cup \left( {A \cap C}\right) \Rightarrow \left( {x \in A \cap B}\right) \text{ or }\left( {x \in A \cap C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow \left( {x \in A\text{ and }x \in B}\right) \text{ or }\left( {x \in A\text{ and }x \in C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow x \in A\text{ and }\left( {x \in B\text{ or }x \in C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow x \in A \cap \left( {B \cup C}\right) \]\n\n\[ \text{Why?}▱ \]
No
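For a finite universe, the distributive law can also be verified exhaustively by machine, which complements (but does not replace) the element-chasing proof above. A Python sketch over all subsets of a three-element universe:

```python
from itertools import combinations

# Enumerate all 8 subsets of {1, 2, 3} and check
# A ∩ (B ∪ C) == (A ∩ B) ∪ (A ∩ C) for every triple (A, B, C).
universe = {1, 2, 3}
subsets = [set(c) for r in range(4) for c in combinations(universe, r)]
law_holds = all(
    A & (B | C) == (A & B) | (A & C)
    for A in subsets for B in subsets for C in subsets
)
print(law_holds)  # True
```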
Theorem 4.1.8 Another Proof using Definitions. If \( A, B \), and \( C \) are any sets, then \( A \times \left( {B \cap C}\right) = \left( {A \times B}\right) \cap \left( {A \times C}\right) \) .
Proof. Commentary: We again ask ourselves: What are we trying to prove? What types of objects are we dealing with? We realize that we wish to prove two facts: (a) \( {LHS} \subseteq {RHS} \), and (b) \( {RHS} \subseteq {LHS} \).\n\nTo prove part (a), and similarly part (b), we'll begin the same way. Let ___ \( \in {LHS} \) to show ___ \( \in {RHS} \). What should ___ be? What does a typical object in the \( {LHS} \) look like?\n\nNow, on to the actual proof.\n\n(a) \( A \times \left( {B \cap C}\right) \subseteq \left( {A \times B}\right) \cap \left( {A \times C}\right) \).\n\nLet \( \left( {x, y}\right) \in A \times \left( {B \cap C}\right) \).\n\n\[ \left( {x, y}\right) \in A \times \left( {B \cap C}\right) \Rightarrow x \in A\text{ and }y \in \left( {B \cap C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow x \in A\text{ and }\left( {y \in B\text{ and }y \in C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow \left( {x \in A\text{ and }y \in B}\right) \text{ and }\left( {x \in A\text{ and }y \in C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow \left( {x, y}\right) \in \left( {A \times B}\right) \text{ and }\left( {x, y}\right) \in \left( {A \times C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow \left( {x, y}\right) \in \left( {A \times B}\right) \cap \left( {A \times C}\right) \]\n\n\[ \text{Why?} \]\n\n(b) \( \left( {A \times B}\right) \cap \left( {A \times C}\right) \subseteq A \times \left( {B \cap C}\right) \).\n\nLet \( \left( {x, y}\right) \in \left( {A \times B}\right) \cap \left( {A \times C}\right) \).\n\n\[ \left( {x, y}\right) \in \left( {A \times B}\right) \cap \left( {A \times C}\right) \Rightarrow \left( {x, y}\right) \in A \times B\text{ and }\left( {x, y}\right) \in A \times C \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow \left( {x \in A\text{ and }y \in B}\right) \text{ and }\left( {x \in A\text{ and }y \in C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow x \in A\text{ and }\left( {y \in B\text{ and }y \in C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow x \in A\text{ and }y \in \left( {B \cap C}\right) \]\n\n\[ \text{Why?} \]\n\n\[ \Rightarrow \left( {x, y}\right) \in A \times \left( {B \cap C}\right) \]\n\n\[ \text{Why?} \]
Yes
A Corollary to the Distributive Law of Sets. Let \( A \) and \( B \) be sets. Then \( \left( {A \cap B}\right) \cup \left( {A \cap {B}^{c}}\right) = A \)
\[ \left( {A \cap B}\right) \cup \left( {A \cap {B}^{c}}\right) = A \cap \left( {B \cup {B}^{c}}\right) \] \[ \text{Why?} \] \[ = A \cap U \] \[ \text{Why?} \] \[ = A \] \[ \text{Why?} \]
No
Theorem 4.2.3 An Indirect Proof in Set Theory. Let \( A, B, C \) be sets. If \( A \subseteq B \) and \( B \cap C = \varnothing \), then \( A \cap C = \varnothing \) .
Proof. Commentary: The usual and first approach would be to assume \( A \subseteq B \) and \( B \cap C = \varnothing \) is true and to attempt to prove \( A \cap C = \varnothing \) is true. To do this you would need to show that nothing is contained in the set \( A \cap C \) . Think about how you would show that something doesn't exist. It is very difficult to do directly.\n\nThe Indirect Method is much easier: If we assume the conclusion is false and we obtain a contradiction - then the theorem must be true. This approach is on sound logical footing since it is exactly the same method of indirect proof that we discussed in Subsection 3.5.3.\n\nAssume \( A \subseteq B \) and \( B \cap C = \varnothing \), and \( A \cap C \neq \varnothing \) . To prove that this cannot occur, let \( x \in A \cap C \) .\n\n\[ x \in A \cap C \Rightarrow x \in A\text{ and }x \in C \]\n\n\[ \Rightarrow x \in B\text{and}x \in C\text{.} \]\n\n\[ \Rightarrow x \in B \cap C \]\n\nBut this contradicts the second premise. Hence, the theorem is proven.
Yes
How can we use set operations applied to \( {B}_{1} \) and \( {B}_{2} \) to produce a partition of \( A \) ?
As a first attempt, we might try these three sets:\n\nTable 4.3.6\n\n\[ \n{B}_{1} \cap {B}_{2} = \{ 1,3\} \n\]\n\n\[ \n{B}_{1}^{c} = \{ 2,4,6\} \n\]\n\n\[ \n{B}_{2}^{c} = \{ 4,5,6\} \n\]\n\nWe have produced all elements of \( A \) but we have 4 and 6 repeated in two sets. In place of \( {B}_{1}^{c} \) and \( {B}_{2}^{c} \), let’s try \( {B}_{1}^{c} \cap {B}_{2} \) and \( {B}_{1} \cap {B}_{2}^{c} \), respectively:\n\nTable 4.3.7\n\n\[ \n{B}_{1}^{c} \cap {B}_{2} = \{ 2\} \text{and} \n\]\n\n\[ \n{B}_{1} \cap {B}_{2}^{c} = \{ 5\} \text{.} \n\]\n\nWe have now produced the elements \( 1,2,3 \), and 5 using \( {B}_{1} \cap {B}_{2},{B}_{1}^{c} \cap {B}_{2} \) and \( {B}_{1} \cap {B}_{2}^{c} \) yet we have not listed the elements 4 and 6 . Most ways that we could combine \( {B}_{1} \) and \( {B}_{2} \) such as \( {B}_{1} \cup {B}_{2} \) or \( {B}_{1} \cup {B}_{2}^{c} \) will produce duplications of listed elements and will not produce both 4 and 6 . However we note that \( {B}_{1}^{c} \cap {B}_{2}^{c} = \{ 4,6\} \), exactly the elements we need.\n\nAfter more experimenting, we might reach a conclusion that each element of \( A \) appears exactly once in one of the four minsets \( {B}_{1} \cap {B}_{2},{B}_{1}^{c} \cap {B}_{2},{B}_{1} \cap {B}_{2}^{c} \) and \( {B}_{1}^{c} \cap {B}_{2}^{c} \) . Hence, we have a partition of \( A \) . In fact this is the finest partition of \( A \) in that all other partitions we could generate consist of selected unions of these minsets.
Yes
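Reconstructing \( B_1 = \{1,3,5\} \) and \( B_2 = \{1,2,3\} \) from the complements listed above, the four minsets and the partition check can be computed directly; a Python sketch with our own variable names:

```python
# Compute the four minsets generated by B1 and B2 inside A and check
# that the nonempty ones partition A.
A = {1, 2, 3, 4, 5, 6}
B1 = {1, 3, 5}
B2 = {1, 2, 3}

minsets = [B1 & B2, (A - B1) & B2, B1 & (A - B2), (A - B1) & (A - B2)]
nonempty = [m for m in minsets if m]

# Partition check: the union is all of A and no element is repeated.
is_partition = (
    set().union(*nonempty) == A
    and sum(len(m) for m in nonempty) == len(A)
)
print(nonempty, is_partition)
```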
Theorem 4.3.8 Minset Partition Theorem. Let \( A \) be a set and let \( {B}_{1} \) , \( {B}_{2}\ldots ,{B}_{n} \) be subsets of \( A \) . The set of nonempty minsets generated by \( {B}_{1},{B}_{2} \) \( \ldots ,{B}_{n} \) is a partition of \( A \) .
Proof. The proof of this theorem is left to the reader.
No
Example 4.3.10 Another Concrete Example of Minsets. Let \( U = \) \( \{ - 2, - 1,0,1,2\} ,{B}_{1} = \{ 0,1,2\} \), and \( {B}_{2} = \{ 0,2\} \) . Then
Table 4.3.11\n\n\[ \n{B}_{1} \cap {B}_{2} = \{ 0,2\} \]\n\n\[ \n{B}_{1}^{c} \cap {B}_{2} = \varnothing \]\n\n\[ \n{B}_{1} \cap {B}_{2}^{c} = \{ 1\} \]\n\n\[ \n{B}_{1}^{c} \cap {B}_{2}^{c} = \{ - 2, - 1\} \]\n\nIn this case, there are only three nonempty minsets, producing the partition \( \{ \{ 0,2\} ,\{ 1\} ,\{ - 2, - 1\} \} \) . An example of a set that could not be produced from just \( {B}_{1} \) and \( {B}_{2} \) is the set of even elements of \( U,\{ - 2,0,2\} \) . This is because -2 and -1 cannot be separated. They are in the same minset and any union of minsets either includes or excludes them both. In general, there are \( {2}^{3} = 8 \) different minset normal forms because there are three nonempty minsets. This means that only 8 of the \( {2}^{5} = {32} \) subsets of \( U \) could be generated from any two sets \( {B}_{1} \) and \( {B}_{2} \) .
Yes
Is the matrix \( A = \left( \begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right) \) equal to the matrix \( B = \left( \begin{array}{ll} 1 & 2 \\ 3 & 5 \end{array}\right) ? \)
No, they are not because the corresponding entries in the second row, second column of the two matrices are not equal.
Yes
Example 5.1.6 A Scalar Product. If \( c = 3 \) and if \( A = \left( \begin{matrix} 1 & - 2 \\ 3 & 5 \end{matrix}\right) \) and we wish to find \( {cA} \), it seems natural to multiply each entry of \( A \) by 3 so that \( {3A} = \left( \begin{matrix} 3 & - 6 \\ 9 & {15} \end{matrix}\right) \), and this is precisely the way scalar multiplication is defined.
Definition 5.1.7 Scalar Multiplication. Let \( A \) be an \( m \times n \) matrix and \( c \) a scalar. Then \( {cA} \) is the \( m \times n \) matrix obtained by multiplying \( c \) times each entry of \( A \) ; that is \( {\left( cA\right) }_{ij} = c{a}_{ij} \).
Yes
Example 5.1.10 A Matrix Product. Let \( A = \left( \begin{matrix} 1 & 0 \\ 3 & 2 \\ - 5 & 1 \end{matrix}\right) \), a \( 3 \times 2 \) matrix, and let \( B = \left( \begin{array}{l} 6 \\ 1 \end{array}\right) \), a \( 2 \times 1 \) matrix. Then \( {AB} \) is a \( 3 \times 1 \) matrix:
\[ {AB} = \left( \begin{matrix} 1 & 0 \\ 3 & 2 \\ - 5 & 1 \end{matrix}\right) \left( \begin{array}{l} 6 \\ 1 \end{array}\right) = \left( \begin{matrix} 1 \cdot 6 + 0 \cdot 1 \\ 3 \cdot 6 + 2 \cdot 1 \\ - 5 \cdot 6 + 1 \cdot 1 \end{matrix}\right) = \left( \begin{matrix} 6 \\ {20} \\ - {29} \end{matrix}\right) \]
Yes
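The same product can be computed from the definition \( (AB)_{i1} = \sum_j a_{ij} b_{j1} \); a minimal Python sketch using nested lists for the matrices:

```python
# The 3x2 by 2x1 product from the example, computed entry by entry.
A = [[1, 0], [3, 2], [-5, 1]]
B = [[6], [1]]

AB = [
    [sum(A[i][j] * B[j][0] for j in range(2))]
    for i in range(3)
]
print(AB)  # [[6], [20], [-29]]
```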
Example 5.1.11 Multiplication with a diagonal matrix. Let \( A = \left( \begin{matrix} - 1 & 0 \\ 0 & 3 \end{matrix}\right) \) and \( B = \left( \begin{matrix} 3 & 10 \\ 2 & 1 \end{matrix}\right) \) . Then \( AB = \left( \begin{matrix} - 1 \cdot 3 + 0 \cdot 2 & - 1 \cdot 10 + 0 \cdot 1 \\ 0 \cdot 3 + 3 \cdot 2 & 0 \cdot 10 + 3 \cdot 1 \end{matrix}\right) = \left( \begin{matrix} - 3 & - 10 \\ 6 & 3 \end{matrix}\right) \)
The net effect is to multiply the first row of \( B \) by -1 and the second row of \( B \) by 3 .
Yes
In the example above, the \( 3 \times 3 \) diagonal matrix \( I \) whose diagonal entries are all 1’s has the distinctive property that for any other \( 3 \times 3 \) matrix \( A \) we have \( {AI} = {IA} = A \) .
Example 5.2.3 Multiplying by the identity matrix.
No
Theorem 5.2.6 Inverses are unique. The inverse of an \( n \times n \) matrix \( A \) , when it exists, is unique.
Proof. Let \( A \) be an \( n \times n \) matrix. Assume to the contrary, that \( A \) has two (different) inverses, say \( B \) and \( C \) . Then\n\n\( B = {BI}\; \) Identity property of \( I \)\n\n\( = B\left( {AC}\right) \; \) Assumption that \( C \) is an inverse of \( A \)\n\n\( = \left( {BA}\right) C\; \) Associativity of matrix multiplication\n\n\( = {IC}\; \) Assumption that \( B \) is an inverse of \( A \)\n\n\( = C\; \) Identity property of \( I \)
Yes
If \( A = \left( \begin{matrix} 1 & 2 \\ - 3 & 5 \end{matrix}\right) \) then \( \det A = 1 \cdot 5 - 2 \cdot \left( {-3}\right) = {11} \) .
Yes
Theorem 5.2.9 Inverse of 2 by 2 matrix. Let \( A = \left( \begin{array}{ll} a & b \\ c & d \end{array}\right) \) . If \( \det A \neq 0, \) then \( {A}^{-1} = \frac{1}{\det A}\left( \begin{matrix} d & - b \\ - c & a \end{matrix}\right) . \)
Proof. See Exercise 4 at the end of this section.
No
Can we find the inverses of the matrices in Example 5.2.8? If \( A = \left( \begin{matrix} 1 & 2 \\ - 3 & 5 \end{matrix}\right) \) then
\[ {A}^{-1} = \frac{1}{11}\left( \begin{matrix} 5 & - 2 \\ 3 & 1 \end{matrix}\right) = \left( \begin{matrix} \frac{5}{11} & - \frac{2}{11} \\ \frac{3}{11} & \frac{1}{11} \end{matrix}\right) \] The reader should verify that \( A{A}^{-1} = {A}^{-1}A = I \) .
No
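The verification that \( A A^{-1} = I \) suggested above can be carried out exactly with Python's `fractions` module; a sketch with our own variable names:

```python
from fractions import Fraction

# Theorem 5.2.9 on A = [[1, 2], [-3, 5]]: det A = 11 and
# A^{-1} = (1/det A) * [[d, -b], [-c, a]].
a, b, c, d = 1, 2, -3, 5
det = a * d - b * c
Ainv = [[Fraction(d, det), Fraction(-b, det)],
        [Fraction(-c, det), Fraction(a, det)]]

A = [[a, b], [c, d]]
# product = A * Ainv; it should be the 2x2 identity matrix.
product = [
    [sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
    for i in range(2)
]
print(det)  # 11
print(product == [[1, 0], [0, 1]])  # True
```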
Example 6.2.5 Ordering subsets of a two element universe. Let \( B = \{ 1,2\} \), and let \( A = \mathcal{P}\left( B\right) = \{ \varnothing ,\{ 1\} ,\{ 2\} ,\{ 1,2\} \} \) . Then \( \subseteq \) is a relation on \( A \) whose digraph is Figure 6.2.6.
![6dd34435-8451-4aec-abdd-96bf3c6137fe_120_0.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_120_0.jpg)\n\nFigure 6.2.6 Graph for set containment on subsets of \( \{ 1,2\} \)
Yes
Example 6.3.5 Set Containment as a Partial Ordering. Let \( A \) be a set. Then \( \mathcal{P}\left( A\right) \) together with the relation \( \subseteq \) (set containment) is a poset.
- Let \( B \in \mathcal{P}\left( A\right) \) . The fact that \( B \subseteq B \) follows from the definition of subset. Hence, set containment is reflexive.\n\n- Let \( {B}_{1},{B}_{2} \in \mathcal{P}\left( A\right) \) and assume that \( {B}_{1} \subseteq {B}_{2} \) and \( {B}_{1} \neq {B}_{2} \) . Could it be that \( {B}_{2} \subseteq {B}_{1} \) ? No. There must be some element \( a \in A \) such that \( a \notin {B}_{1} \), but \( a \in {B}_{2} \) . This is exactly what we need to conclude that \( {B}_{2} \) is not contained in \( {B}_{1} \) . Hence, set containment is antisymmetric.\n\n- Let \( {B}_{1},{B}_{2},{B}_{3} \in \mathcal{P}\left( A\right) \) and assume that \( {B}_{1} \subseteq {B}_{2} \) and \( {B}_{2} \subseteq {B}_{3} \) . Does it follow that \( {B}_{1} \subseteq {B}_{3} \) ? Yes, if \( a \in {B}_{1} \), then \( a \in {B}_{2} \) because \( {B}_{1} \subseteq {B}_{2} \) . Now that we have \( a \in {B}_{2} \) and we have assumed \( {B}_{2} \subseteq {B}_{3} \), we conclude that \( a \in {B}_{3} \) . Therefore, \( {B}_{1} \subseteq {B}_{3} \) and so set containment is transitive.
Yes
Consider the partial ordering relation \( s \) whose Hasse diagram is Figure 6.3.8.
Certainly \( A = \{ 1,2,3,4,5\} \) and \( {1s2},{3s4},{1s4},{1s5} \), etc. Notice that \( {1s5} \) is implied by the fact that there is a path of length three upward from 1 to 5. This follows from the edges that are shown and the transitive property that is presumed in a poset. Since \( {1s3} \) and \( {3s4} \), we know that \( {1s4} \). We then combine \( {1s4} \) with \( {4s5} \) to infer \( {1s5} \). Without going into details why, here is a complete list of pairs defined by \( s \).\n\n\[ s = \{ \left( {1,1}\right) ,\left( {2,2}\right) ,\left( {3,3}\right) ,\left( {4,4}\right) ,\left( {5,5}\right) ,\left( {1,3}\right) ,\left( {1,4}\right) ,\left( {1,5}\right) ,\left( {1,2}\right) ,\left( {3,4}\right) ,\left( {3,5}\right) ,\left( {4,5}\right) ,\left( {2,5}\right) \} \]
Yes
Example 6.4.2 A simple example. Let \( A = \{ 2,5,6\} \) and let \( r \) be the relation \( \{ \left( {2,2}\right) ,\left( {2,5}\right) ,\left( {5,6}\right) ,\left( {6,6}\right) \} \) on \( A \) . Since \( r \) is a relation from \( A \) into the same set \( A \) (the \( B \) of the definition), we have \( {a}_{1} = 2,{a}_{2} = 5 \), and \( {a}_{3} = 6 \), while \( {b}_{1} = 2,{b}_{2} = 5 \), and \( {b}_{3} = 6 \) .
Next, since\n\n- \( {2r2} \), we have \( {R}_{11} = 1 \)\n\n- \( {2r5} \), we have \( {R}_{12} = 1 \)\n\n- \( {5r6} \), we have \( {R}_{23} = 1 \)\n\n- \( {6r6} \), we have \( {R}_{33} = 1 \)\n\nAll other entries of \( R \) are zero, so\n\n\[ R = \left( \begin{array}{lll} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right) \]
Example 6.4.5 Composition by Multiplication. Suppose that \( R = \left( \begin{array}{llll} 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array}\right) \) and \( S = \left( \begin{array}{llll} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right) \) . Then using Boolean arithmetic,
\( {RS} = \left( \begin{array}{llll} 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array}\right) \) and \( {SR} = \left( \begin{array}{llll} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) .\)
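Boolean arithmetic replaces the sum and product of ordinary matrix multiplication with "or" and "and": entry \( (i, j) \) of \( RS \) is 1 exactly when some \( k \) has \( R_{ik} = S_{kj} = 1 \). A small Python sketch (not from the text) confirms both products above:

```python
def bool_mult(A, B):
    """Boolean matrix product: (AB)[i][j] = OR over k of (A[i][k] AND B[k][j])."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

R = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
S = [[0, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1], [0, 0, 0, 0]]
print(bool_mult(R, S))  # rows: [0,0,1,1], [0,1,1,1], [0,0,1,1], [0,0,0,1]
print(bool_mult(S, R))  # rows: [1,1,1,1], [0,1,1,1], [0,0,1,0], [0,0,0,0]
```

Note that \( RS \neq SR \); composition of relations, like matrix multiplication, is not commutative.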
Example 6.4.7 Relations and Information. This final example gives an insight into how relational database programs can systematically answer questions pertaining to large masses of information. Matrices \( R \) (on the left) and \( S \) (on the right) define the relations \( r \) and \( s \), where \( {arb} \) if software \( a \) can be run with operating system \( b \), and \( {bsc} \) if operating system \( b \) can run on computer \( c \).
Although the relation between the software and the computers is not given directly by the data, we can easily compute it. The matrix of \( {rs} \) is \( {RS} \), which is \[ \begin{array}{cc} & \begin{array}{ccc} \mathrm{C}1 & \mathrm{C}2 & \mathrm{C}3 \end{array} \\ \begin{array}{c} \mathrm{P}1 \\ \mathrm{P}2 \\ \mathrm{P}3 \\ \mathrm{P}4 \end{array} & \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{array}\right) \end{array} \] This matrix tells us at a glance which software will run on the computers listed. In this case, all software will run on all computers with the exception of program P2, which will not run on the computer C3, and programs P3 and P4, which will not run on the computer C1.
Theorem 6.5.4 Matrix of a Transitive Closure. Let \( r \) be a relation on a finite set and \( R \) its matrix. Let \( {R}^{ + } \) be the matrix of \( {r}^{ + } \), the transitive closure of \( r \) . Then \( {R}^{ + } = R + {R}^{2} + \cdots + {R}^{n} \), using Boolean arithmetic.
Using this theorem, we find that \( {R}^{ + } \) is the \( 5 \times 5 \) matrix consisting of all 1's; thus, \( {r}^{ + } \) is all of \( A \times A \) .
Let \( A \) be the set of students who are sitting in a classroom, let \( B \) be the set of seats in the classroom, and let \( s \) be the function which maps each student to the chair he or she is sitting in. When is \( s \) one-to-one? When is it onto?
Under normal circumstances, \( s \) would always be injective since no two different students would be in the same seat. In order for \( s \) to be surjective, we need all seats to be used, so \( s \) is a surjection if the classroom is filled to capacity.
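The two conditions can be stated as simple checks on a finite function. The sketch below is not from the text; the student names and seat labels are hypothetical, chosen so that the seating map is injective (no shared seat) but not surjective (one seat is left empty):

```python
def is_injective(f, domain):
    """No two elements of the domain share an image."""
    images = [f[a] for a in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    """Every element of the codomain is an image."""
    return {f[a] for a in domain} == set(codomain)

# Hypothetical classroom: three students, four seats.
seats = ["s1", "s2", "s3", "s4"]
s = {"Ann": "s2", "Bob": "s1", "Cal": "s4"}
print(is_injective(s, s.keys()))          # True: no seat holds two students
print(is_surjective(s, s.keys(), seats))  # False: seat s3 is unused
```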
Example 7.2.9 Counting the Alphabet. The alphabet \( \{ A, B, C,\ldots, Z\} \) has cardinality 26 through the following bijection onto the set \( \{ 1,2,3,\ldots ,{26}\} \) .
\[ \begin{matrix} A & B & C & \cdots & Z \\ \downarrow & \downarrow & \downarrow & \cdots & \downarrow \\ 1 & 2 & 3 & \cdots & {26} \end{matrix}. \]
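This bijection is exactly what a programmer builds when pairing letters with positions. A minimal Python sketch (not from the text):

```python
from string import ascii_uppercase  # "ABC...Z"

# The bijection A -> 1, B -> 2, ..., Z -> 26.
number_of = {letter: i for i, letter in enumerate(ascii_uppercase, start=1)}
print(number_of["A"], number_of["C"], number_of["Z"])  # 1 3 26

# The 26 letters map to 26 distinct numbers, witnessing cardinality 26.
print(len(number_of), len(set(number_of.values())))  # 26 26
```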
As many evens as all positive integers. Recall that \( 2\mathbb{P} = \{ b \in \mathbb{P} \mid b = {2k} \) for some \( k \in \mathbb{P}\} \) . Paradoxically, \( 2\mathbb{P} \) has the same cardinality as the set \( \mathbb{P} \) of positive integers. To prove this, we must find a bijection from \( \mathbb{P} \) to \( 2\mathbb{P} \) .
Such a function isn’t unique, but this one is the simplest: \( f : \mathbb{P} \rightarrow 2\mathbb{P} \) where \( f\left( m\right) = {2m} \) . Two statements must be proven to justify our claim that \( f \) is a bijection:\n\n- \( f \) is one-to-one.\n\nProof: Let \( a, b \in \mathbb{P} \) and assume that \( f\left( a\right) = f\left( b\right) \) . We must prove that \( a = b \) .\n\n\[ f\left( a\right) = f\left( b\right) \Rightarrow {2a} = {2b} \Rightarrow a = b. \]\n\n- \( f \) is onto.\n\nProof: Let \( b \in 2\mathbb{P} \) . We want to show that there exists an element \( a \in \mathbb{P} \) such that \( f\left( a\right) = b \) . If \( b \in 2\mathbb{P}, b = {2k} \) for some \( k \in \mathbb{P} \) by the definition of \( 2\mathbb{P} \) . So we have \( f\left( k\right) = {2k} = b \) . Hence, each element of \( 2\mathbb{P} \) is the image of some element of \( \mathbb{P} \) .
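A program can only spot-check finitely many values, so the proof above is what establishes the bijection on all of \( \mathbb{P} \); still, the following sketch (not from the text) illustrates both halves of the argument on the first 1000 positive integers:

```python
def f(m):
    """The pairing f(m) = 2m from P to 2P."""
    return 2 * m

N = 1000
images = [f(m) for m in range(1, N + 1)]

# One-to-one on the sample: no two inputs share an image.
assert len(set(images)) == N
# Onto the sample of 2P: every even b = 2k is hit by k, as in the proof.
assert all(f(b // 2) == b for b in range(2, 2 * N + 1, 2))
print("f(m) = 2m pairs 1..1000 with the first 1000 even positive integers")
```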