Lemma 3.3.15 Suppose a matrix is of the form\n\n\[ \nM = \left( \begin{array}{ll} A & * \\ \mathbf{0} & a \end{array}\right) \n\]\n\n(3.13)\n\nor\n\[\nM = \left( \begin{array}{ll} A & \mathbf{0} \\ * & a \end{array}\right) \n\]\n\n(3.14)\n\nwhere \( a \) is a number, \( A \) is an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix, \( * \) denotes either a column or a row having length \( n - 1 \), and the \( \mathbf{0} \) denotes either a column or a row of length \( n - 1 \) consisting entirely of zeros. Then \( \det \left( M\right) = a\det \left( A\right) \) .
Proof: Denote \( M \) by \( \left( {m}_{ij}\right) \) . Thus in the first case, \( {m}_{nn} = a \) and \( {m}_{ni} = 0 \) if \( i \neq n \) while in the second case, \( {m}_{nn} = a \) and \( {m}_{in} = 0 \) if \( i \neq n \) . From the definition of the determinant,\n\n\[\n\det \left( M\right) \equiv \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }{\operatorname{sgn}}_{n}\left( {{k}_{1},\cdots ,{k}_{n}}\right) {m}_{1{k}_{1}}\cdots {m}_{n{k}_{n}}\n\]\n\nLetting \( \theta \) denote the position of \( n \) in the ordered list \( \left( {{k}_{1},\cdots ,{k}_{n}}\right) \) and using the earlier conventions used to prove Lemma 3.3.1, \( \det \left( M\right) \) equals\n\n\[\n\mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }{\left( -1\right) }^{n - \theta }{\operatorname{sgn}}_{n - 1}\left( {{k}_{1},\cdots ,{k}_{\theta - 1},\overset{\theta }{{k}_{\theta + 1}},\cdots ,\overset{n - 1}{{k}_{n}}}\right) {m}_{1{k}_{1}}\cdots {m}_{n{k}_{n}}\n\]\n\nNow suppose (3.14). If \( {k}_{n} \neq n \), then \( {k}_{j} = n \) for some \( j < n \) and the term contains the factor \( {m}_{jn} = 0 \), so every such term equals zero. Therefore, the only terms which survive are those for which \( \theta = n \) or in other words, those for which \( {k}_{n} = n \) . Therefore, the above expression reduces to\n\n\[\na\mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n - 1}\right) }{\operatorname{sgn}}_{n - 1}\left( {{k}_{1},\cdots ,{k}_{n - 1}}\right) {m}_{1{k}_{1}}\cdots {m}_{\left( {n - 1}\right) {k}_{n - 1}} = a\det \left( A\right) .\n\]\n\nTo get the assertion in the situation of (3.13), use Corollary 3.3.8 and (3.14) to write\n\n\[\n\det \left( M\right) = \det \left( {M}^{T}\right) = \det \left( \left( \begin{matrix} {A}^{T} & \mathbf{0} \\ * & a \end{matrix}\right) \right) = a\det \left( {A}^{T}\right) = a\det \left( A\right) .\blacksquare\n\]
Yes
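The lemma lends itself to a quick numerical check. Below is an illustrative Python sketch (not part of the original text; all names and sample entries are chosen for illustration) that computes the determinant directly from the permutation-sum definition used in the proof and confirms \( \det(M) = a\det(A) \) for small matrices of both forms (3.13) and (3.14).

```python
from itertools import permutations
from math import prod

def sgn(p):
    # Sign of a permutation: +1 for an even number of inversions, -1 for odd.
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inversions % 2 else 1

def det(M):
    # Determinant straight from the definition:
    # det(M) = sum over permutations k of sgn(k) * m_{1 k_1} ... m_{n k_n}.
    n = len(M)
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[2, 3], [1, 4]]
a = 5
# Form (3.13): last row is (0, 0, a); the entries 7, 9 play the role of *.
M = [[2, 3, 7],
     [1, 4, 9],
     [0, 0, 5]]
assert det(M) == a * det(A)      # 25 == 5 * 5
# Form (3.14): last column is (0, 0, a) above the * row.
Mt = [[2, 1, 0],
      [3, 4, 0],
      [7, 9, 5]]
assert det(Mt) == a * det(A)
```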
Theorem 3.3.17 Let \( A \) be an \( n \times n \) matrix where \( n \geq 2 \) . Then\n\n\[ \det \left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}\operatorname{cof}{\left( A\right) }_{ij} = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ij}\operatorname{cof}{\left( A\right) }_{ij}. \]\n\n\( \left( {3.15}\right) \)\n\nThe first formula consists of expanding the determinant along the \( {i}^{\text{th }} \) row and the second expands the determinant along the \( {j}^{\text{th }} \) column.
Proof: Let \( \left( {{a}_{i1},\cdots ,{a}_{in}}\right) \) be the \( {i}^{th} \) row of \( A \) . Let \( {B}_{j} \) be the matrix obtained from \( A \) by leaving every row the same except the \( {i}^{\text{th }} \) row which in \( {B}_{j} \) equals \( \left( {0,\cdots ,0,{a}_{ij},0,\cdots ,0}\right) \) . Then by Corollary 3.3.9,\n\n\[ \det \left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\det \left( {B}_{j}\right) \]\n\nDenote by \( {A}^{ij} \) the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix obtained by deleting the \( {i}^{th} \) row and the \( {j}^{th} \) column of \( A \) . Thus \( \operatorname{cof}{\left( A\right) }_{ij} \equiv {\left( -1\right) }^{i + j}\det \left( {A}^{ij}\right) \) . At this point, recall that from Proposition 3.3.6, when two rows or two columns in a matrix \( M \) are switched, this results in multiplying the determinant of the old matrix by \( -1 \) to get the determinant of the new matrix. Therefore, by Lemma 3.3.15,\n\n\[ \det \left( {B}_{j}\right) = {\left( -1\right) }^{n - j}{\left( -1\right) }^{n - i}\det \left( \left( \begin{matrix} {A}^{ij} & * \\ \mathbf{0} & {a}_{ij} \end{matrix}\right) \right) \]\n\n\[ = {\left( -1\right) }^{i + j}\det \left( \left( \begin{matrix} {A}^{ij} & * \\ \mathbf{0} & {a}_{ij} \end{matrix}\right) \right) = {a}_{ij}\operatorname{cof}{\left( A\right) }_{ij}. \]\n\nTherefore,\n\n\[ \det \left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}\operatorname{cof}{\left( A\right) }_{ij} \]\n\nwhich is the formula for expanding \( \det \left( A\right) \) along the \( {i}^{\text{th }} \) row. Also,\n\n\[ \det \left( A\right) = \det \left( {A}^{T}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}^{T}\operatorname{cof}{\left( {A}^{T}\right) }_{ij} = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ji}\operatorname{cof}{\left( A\right) }_{ji} \]\n\nwhich is the formula for expanding \( \det \left( A\right) \) along the \( {i}^{\text{th }} \) column. \( \blacksquare \)
Yes
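Theorem 3.3.17 can be spot-checked the same way: expanding along any row or any column should reproduce the permutation-sum determinant. An illustrative Python sketch (the sample matrix is arbitrary):

```python
from itertools import permutations
from math import prod

def det(M):
    # Reference determinant from the permutation-sum definition.
    n = len(M)
    def sgn(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def minor(M, i, j):
    # A^{ij}: delete row i and column j.
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def cof(M, i, j):
    # cof(A)_{ij} = (-1)^{i+j} det(A^{ij}); 0-based indices give the same sign.
    return (-1) ** (i + j) * det(minor(M, i, j))

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]
n = len(A)
for i in range(n):   # expansion along the i-th row
    assert sum(A[i][j] * cof(A, i, j) for j in range(n)) == det(A)
for j in range(n):   # expansion along the j-th column
    assert sum(A[i][j] * cof(A, i, j) for i in range(n)) == det(A)
```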
Theorem 3.3.18 \( {A}^{-1} \) exists if and only if \( \det \left( A\right) \neq 0 \) . If \( \det \left( A\right) \neq 0 \), then \( {A}^{-1} = \left( {a}_{ij}^{-1}\right) \) where\n\n\[ \n{a}_{ij}^{-1} = \det {\left( A\right) }^{-1}\operatorname{cof}{\left( A\right) }_{ji} \n\]\n\nfor \( \operatorname{cof}{\left( A\right) }_{ij} \) the \( i{j}^{\text{th }} \) cofactor of \( A \) .
Proof: By Theorem 3.3.17 and letting \( \left( {a}_{ir}\right) = A \), if \( \det \left( A\right) \neq 0 \) ,\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ir}\operatorname{cof}{\left( A\right) }_{ir}\det {\left( A\right) }^{-1} = \det \left( A\right) \det {\left( A\right) }^{-1} = 1. \n\]\n\nNow in the matrix \( A \), replace the \( {k}^{th} \) column with the \( {r}^{th} \) column and then expand along the \( {k}^{\text{th }} \) column. This yields for \( k \neq r \) ,\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ir}\operatorname{cof}{\left( A\right) }_{ik}\det {\left( A\right) }^{-1} = 0 \n\]\n\nbecause there are two equal columns by Corollary 3.3.9. Summarizing,\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ir}\operatorname{cof}{\left( A\right) }_{ik}\det {\left( A\right) }^{-1} = {\delta }_{rk} \n\]\n\nUsing the other formula in Theorem 3.3.17, and similar reasoning,\n\n\[ \n\mathop{\sum }\limits_{{j = 1}}^{n}{a}_{rj}\operatorname{cof}{\left( A\right) }_{kj}\det {\left( A\right) }^{-1} = {\delta }_{rk} \n\]\n\nThis proves that if \( \det \left( A\right) \neq 0 \), then \( {A}^{-1} \) exists with \( {A}^{-1} = \left( {a}_{ij}^{-1}\right) \), where\n\n\[ \n{a}_{ij}^{-1} = \operatorname{cof}{\left( A\right) }_{ji}\det {\left( A\right) }^{-1}. \n\]\n\nNow suppose \( {A}^{-1} \) exists. Then by Theorem 3.3.13,\n\n\[ \n1 = \det \left( I\right) = \det \left( {A{A}^{-1}}\right) = \det \left( A\right) \det \left( {A}^{-1}\right) \n\]\n\nso \( \det \left( A\right) \neq 0 \) . ∎
Yes
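The inverse formula of Theorem 3.3.18 is easy to test on a small matrix with exact rational arithmetic. An illustrative Python sketch (the matrix and names are arbitrary); note the transposed indices in \( {a}_{ij}^{-1} = \operatorname{cof}(A)_{ji}/\det(A) \):

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def det(M):
    n = len(M)
    def sgn(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def cof(M, i, j):
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
    return (-1) ** (i + j) * det(minor)

def inverse(A):
    # (A^{-1})_{ij} = cof(A)_{ji} / det(A)  -- the ji, not ij, cofactor.
    n, d = len(A), det(A)
    return [[Fraction(cof(A, j, i), d) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
Ainv = inverse(A)
n = len(A)
AAinv = [[sum(A[i][k] * Ainv[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
assert AAinv == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```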
Corollary 3.3.19 Let \( A \) be an \( n \times n \) matrix and suppose there exists an \( n \times n \) matrix \( B \) such that \( {BA} = I \) . Then \( {A}^{-1} \) exists and \( {A}^{-1} = B \) . Also, if there exists \( C \) an \( n \times n \) matrix such that \( {AC} = I \), then \( {A}^{-1} \) exists and \( {A}^{-1} = C \) .
Proof: Since \( {BA} = I \), Theorem 3.3.13 implies\n\n\[ \det B\det A = 1 \]\n\nand so \( \det A \neq 0 \) . Therefore from Theorem 3.3.18, \( {A}^{-1} \) exists. Therefore,\n\n\[ {A}^{-1} = \left( {BA}\right) {A}^{-1} = B\left( {A{A}^{-1}}\right) = {BI} = B. \]\n\nThe case where \( {AC} = I \) is handled similarly. ∎
Yes
Theorem 3.3.23 If \( A \), an \( m \times n \) matrix, has determinant rank \( r \), then there exist \( r \) rows of the matrix such that every other row is a linear combination of these \( r \) rows.
Proof: Suppose the determinant rank of \( A = \left( {a}_{ij}\right) \) equals \( r \) . Thus some \( r \times r \) submatrix has non zero determinant and there is no larger square submatrix which has non zero determinant. Suppose such a submatrix is determined by the \( r \) columns whose indices are\n\n\[ \n{j}_{1} < \cdots < {j}_{r} \n\]\n\nand the \( r \) rows whose indices are\n\n\[ \n{i}_{1} < \cdots < {i}_{r} \n\]\n\nI want to show that every row is a linear combination of these rows. Consider the \( {l}^{th} \) row and let \( p \) be an index between 1 and \( n \) . Form the following \( \left( {r + 1}\right) \times \left( {r + 1}\right) \) matrix\n\n\[ \n\left( \begin{array}{llll} {a}_{{i}_{1}{j}_{1}} & \cdots & {a}_{{i}_{1}{j}_{r}} & {a}_{{i}_{1}p} \\ \vdots & & \vdots & \vdots \\ {a}_{{i}_{r}{j}_{1}} & \cdots & {a}_{{i}_{r}{j}_{r}} & {a}_{{i}_{r}p} \\ {a}_{l{j}_{1}} & \cdots & {a}_{l{j}_{r}} & {a}_{lp} \end{array}\right) \n\]\n\nOf course you can assume \( l \notin \left\{ {{i}_{1},\cdots ,{i}_{r}}\right\} \) because there is nothing to prove if the \( {l}^{th} \) row is one of the chosen ones. The above matrix has determinant 0 . This is because if \( p \notin \left\{ {{j}_{1},\cdots ,{j}_{r}}\right\} \) then the above would be a submatrix of \( A \) which is too large to have non zero determinant. On the other hand, if \( p \in \left\{ {{j}_{1},\cdots ,{j}_{r}}\right\} \) then the above matrix has two columns which are equal so its determinant is still 0 .\n\nExpand the determinant of the above matrix along the last column. Let \( {C}_{k} \) denote the cofactor associated with the entry \( {a}_{{i}_{k}p} \) . This is not dependent on the choice of \( p \) . Remember, you delete the column and the row the entry is in and take the determinant of what is left and multiply by -1 raised to an appropriate power. Let \( C \) denote the cofactor associated with \( {a}_{lp} \) . 
This is given to be nonzero, it being the determinant of the matrix\n\n\[ \n\left( \begin{array}{lll} {a}_{{i}_{1}{j}_{1}} & \cdots & {a}_{{i}_{1}{j}_{r}} \\ \vdots & & \vdots \\ {a}_{{i}_{r}{j}_{1}} & \cdots & {a}_{{i}_{r}{j}_{r}} \end{array}\right) \n\]\n\nThus\n\n\[ \n0 = {a}_{lp}C + \mathop{\sum }\limits_{{k = 1}}^{r}{C}_{k}{a}_{{i}_{k}p} \n\]\n\nwhich implies\n\n\[ \n{a}_{lp} = \mathop{\sum }\limits_{{k = 1}}^{r}\frac{-{C}_{k}}{C}{a}_{{i}_{k}p} \equiv \mathop{\sum }\limits_{{k = 1}}^{r}{m}_{k}{a}_{{i}_{k}p} \n\]\n\nSince this is true for every \( p \) and since \( {m}_{k} \) does not depend on \( p \), this has shown the \( {l}^{th} \) row is a linear combination of the \( {i}_{1},{i}_{2},\cdots ,{i}_{r} \) rows.
Yes
Corollary 3.3.24 The determinant rank equals the row rank.
Proof: From Theorem 3.3.23, every row is in the span of \( r \) rows where \( r \) is the determinant rank. Therefore, the row rank (dimension of the span of the rows) is no larger than the determinant rank. Could the row rank be smaller than the determinant rank? If so, there exist \( p \) rows for \( p < r \equiv \) determinant rank, such that the span of these \( p \) rows equals the row space. But then you could consider the \( r \times r \) submatrix which determines the determinant rank and it would follow that each of these rows would be in the span of the restrictions of the \( p \) rows just mentioned. By Theorem 2.4.4, the exchange theorem, the rows of this submatrix would not be linearly independent and so some row is a linear combination of the others. By Corollary 3.3.1 the determinant would be 0, a contradiction.
Yes
Corollary 3.3.25 If \( A \) has determinant rank \( r \), then there exist \( r \) columns of the matrix such that every other column is a linear combination of these \( r \) columns. Also the column rank equals the determinant rank.
Proof: This follows from the above by considering \( {A}^{T} \) . The rows of \( {A}^{T} \) are the columns of \( A \) and the determinant rank of \( {A}^{T} \) and \( A \) are the same. Therefore, from Corollary 3.3.24, column rank of \( A = \) row rank of \( {A}^{T} = \) determinant rank of \( {A}^{T} = \) determinant rank of \( A \) .
Yes
Theorem 3.3.26 Let \( A \) be an \( n \times n \) matrix. Then the following are equivalent.\n\n1. \( \det \left( A\right) = 0 \) .\n\n2. \( A,{A}^{T} \) are not one to one.\n\n3. \( A \) is not onto.
Proof: Suppose \( \det \left( A\right) = 0 \) . Then the determinant rank of \( A = r < n \) . Therefore, there exist \( r \) columns such that every other column is a linear combination of these columns by Theorem 3.3.23. In particular, it follows that for some \( m \), the \( {m}^{\text{th }} \) column is a linear combination of all the others. Thus letting \( A = \left( \begin{array}{lllll} {\mathbf{a}}_{1} & \cdots & {\mathbf{a}}_{m} & \cdots & {\mathbf{a}}_{n} \end{array}\right) \) where the columns are denoted by \( {\mathbf{a}}_{i} \), there exist scalars \( {\alpha }_{i} \) such that\n\n\[ {\mathbf{a}}_{m} = \mathop{\sum }\limits_{{k \neq m}}{\alpha }_{k}{\mathbf{a}}_{k} \]\n\nNow consider the column vector, \( \mathbf{x} \equiv {\left( \begin{array}{lllll} {\alpha }_{1} & \cdots & - 1 & \cdots & {\alpha }_{n} \end{array}\right) }^{T} \) . Then\n\n\[ A\mathbf{x} = - {\mathbf{a}}_{m} + \mathop{\sum }\limits_{{k \neq m}}{\alpha }_{k}{\mathbf{a}}_{k} = \mathbf{0}. \]\n\nSince also \( A\mathbf{0} = \mathbf{0} \), it follows \( A \) is not one to one. Similarly, \( {A}^{T} \) is not one to one by the same argument applied to \( {A}^{T} \) . This verifies that 1.) implies 2.).\n\nNow suppose 2.). Then since \( {A}^{T} \) is not one to one, it follows there exists \( \mathbf{x} \neq \mathbf{0} \) such that\n\n\[ {A}^{T}\mathbf{x} = \mathbf{0}. \]\n\nTaking the transpose of both sides yields\n\n\[ {\mathbf{x}}^{T}A = {\mathbf{0}}^{T} \]\n\nwhere the \( {\mathbf{0}}^{T} \) is a \( 1 \times n \) matrix or row vector. Now if \( A\mathbf{y} = \mathbf{x} \), then\n\n\[ {\left| \mathbf{x}\right| }^{2} = {\mathbf{x}}^{T}\left( {A\mathbf{y}}\right) = \left( {{\mathbf{x}}^{T}A}\right) \mathbf{y} = {\mathbf{0}}^{T}\mathbf{y} = 0 \]\n\ncontrary to \( \mathbf{x} \neq \mathbf{0} \) . Consequently there can be no \( \mathbf{y} \) such that \( A\mathbf{y} = \mathbf{x} \) and so \( A \) is not onto. This shows that 2.) implies 3.).\n\nFinally, suppose 3.). If 1.) 
does not hold, then \( \det \left( A\right) \neq 0 \) but then from Theorem 3.3.18 \( {A}^{-1} \) exists and so for every \( \mathbf{y} \in {\mathbb{F}}^{n} \) there exists a unique \( \mathbf{x} \in {\mathbb{F}}^{n} \) such that \( A\mathbf{x} = \mathbf{y} \) . In fact \( \mathbf{x} = {A}^{-1}\mathbf{y} \) . Thus \( A \) would be onto contrary to 3.). This shows 3.) implies 1.).
Yes
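The construction in the proof that 1.) implies 2.) can be carried out concretely. In the sketch below (illustrative, following the proof's notation), the third column of a singular sample matrix equals \( -1 \) times the first plus \( 2 \) times the second, so the vector \( \mathbf{x} \) with \( -1 \) in position \( m = 3 \) satisfies \( A\mathbf{x} = \mathbf{0} \):

```python
from itertools import permutations
from math import prod

def det(M):
    # Determinant from the permutation-sum definition.
    n = len(M)
    def sgn(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
assert det(A) == 0
# column 3 = (-1)*column 1 + 2*column 2, so x = (alpha_1, alpha_2, -1) = (-1, 2, -1).
x = [-1, 2, -1]
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
assert Ax == [0, 0, 0]    # a nonzero x with Ax = 0, so A is not one to one
```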
Corollary 3.3.27 Let \( A \) be an \( n \times n \) matrix. Then the following are equivalent.\n\n1. \( \det \left( A\right) \neq 0 \) .\n\n2. \( A \) and \( {A}^{T} \) are one to one.\n\n3. \( A \) is onto.
Proof: This follows immediately from the above theorem.
No
Lemma 3.4.2 Suppose for all \( \left| \lambda \right| \) large enough,\n\n\[{A}_{0} + {A}_{1}\lambda + \cdots + {A}_{m}{\lambda }^{m} = 0,\]\n\nwhere the \( {A}_{i} \) are \( n \times n \) matrices. Then each \( {A}_{i} = 0 \) .
Proof: Multiply by \( {\lambda }^{-m} \) to obtain\n\n\[{A}_{0}{\lambda }^{-m} + {A}_{1}{\lambda }^{-m + 1} + \cdots + {A}_{m - 1}{\lambda }^{-1} + {A}_{m} = 0.\]\n\nNow let \( \left| \lambda \right| \rightarrow \infty \) to obtain \( {A}_{m} = 0 \) . With this, multiply by \( \lambda \) to obtain\n\n\[{A}_{0}{\lambda }^{-m + 1} + {A}_{1}{\lambda }^{-m + 2} + \cdots + {A}_{m - 1} = 0.\]\n\nNow let \( \left| \lambda \right| \rightarrow \infty \) to obtain \( {A}_{m - 1} = 0 \) . Continue multiplying by \( \lambda \) and letting \( \left| \lambda \right| \rightarrow \infty \) to obtain that all the \( {A}_{i} = 0 \) . ∎
Yes
Corollary 3.4.3 Let \( {A}_{i} \) and \( {B}_{i} \) be \( n \times n \) matrices and suppose\n\n\[ \n{A}_{0} + {A}_{1}\lambda + \cdots + {A}_{m}{\lambda }^{m} = {B}_{0} + {B}_{1}\lambda + \cdots + {B}_{m}{\lambda }^{m} \n\]\n\nfor all \( \left| \lambda \right| \) large enough. Then \( {A}_{i} = {B}_{i} \) for all \( i \) . Consequently if \( \lambda \) is replaced by any \( n \times n \) matrix, the two sides will be equal. That is, for \( C \) any \( n \times n \) matrix,\n\n\[ \n{A}_{0} + {A}_{1}C + \cdots + {A}_{m}{C}^{m} = {B}_{0} + {B}_{1}C + \cdots + {B}_{m}{C}^{m}. \n\]
Proof: Subtract and use the result of Lemma 3.4.2. ∎
No
Theorem 3.4.4 Let \( A \) be an \( n \times n \) matrix and let \( p\left( \lambda \right) \equiv \det \left( {{\lambda I} - A}\right) \) be the characteristic polynomial. Then \( p\left( A\right) = 0 \) .
Proof: Let \( C\left( \lambda \right) \) equal the transpose of the cofactor matrix of \( \left( {{\lambda I} - A}\right) \) for \( \left| \lambda \right| \) large. (If \( \left| \lambda \right| \) is large enough, then \( \lambda \) cannot be in the finite list of eigenvalues of \( A \) and so for such \( \lambda ,{\left( \lambda I - A\right) }^{-1} \) exists.) Therefore, by Theorem 3.3.18\n\n\[ C\left( \lambda \right) = p\left( \lambda \right) {\left( \lambda I - A\right) }^{-1}. \]\n\nNote that each entry in \( C\left( \lambda \right) \) is a polynomial in \( \lambda \) having degree no more than \( n - 1 \) . Therefore, collecting the terms,\n\n\[ C\left( \lambda \right) = {C}_{0} + {C}_{1}\lambda + \cdots + {C}_{n - 1}{\lambda }^{n - 1} \]\n\nfor \( {C}_{j} \) some \( n \times n \) matrix. It follows that for all \( \left| \lambda \right| \) large enough,\n\n\[ \left( {{\lambda I} - A}\right) \left( {{C}_{0} + {C}_{1}\lambda + \cdots + {C}_{n - 1}{\lambda }^{n - 1}}\right) = p\left( \lambda \right) I \]\n\nand so Corollary 3.4.3 may be used. It follows the matrix coefficients corresponding to equal powers of \( \lambda \) are equal on both sides of this equation. Therefore, if \( \lambda \) is replaced with \( A \), the two sides will be equal. Thus\n\n\[ 0 = \left( {A - A}\right) \left( {{C}_{0} + {C}_{1}A + \cdots + {C}_{n - 1}{A}^{n - 1}}\right) = p\left( A\right) I = p\left( A\right) .\blacksquare \]
Yes
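The Cayley-Hamilton theorem can be verified numerically for a small matrix. The sketch below computes the coefficients of \( p(t) = \det(tI - A) \) with the Faddeev-LeVerrier recursion (a standard method, not the argument used in the text; it is chosen here only as a convenient exact way to get the characteristic polynomial) and then checks \( p(A) = 0 \):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def char_poly(A):
    # Faddeev-LeVerrier: p(t) = t^n + c[n-1] t^{n-1} + ... + c[0].
    n = len(A)
    c = [Fraction(0)] * (n + 1)
    c[n] = Fraction(1)
    M = [[Fraction(0)] * n for _ in range(n)]
    for k in range(1, n + 1):
        M = matmul(A, M)                 # M_k = A M_{k-1} + c_{n-k+1} I
        for i in range(n):
            M[i][i] += c[n - k + 1]
        AM = matmul(A, M)
        c[n - k] = -sum(AM[i][i] for i in range(n)) / k   # c_{n-k} = -tr(A M_k)/k
    return c

def poly_at_matrix(c, A):
    # Evaluate c[0] I + c[1] A + ... + c[n] A^n.
    n = len(A)
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # A^0 = I
    R = [[Fraction(0)] * n for _ in range(n)]
    for coef in c:
        R = [[R[i][j] + coef * P[i][j] for j in range(n)] for i in range(n)]
        P = matmul(A, P)
    return R

A = [[1, 2], [3, 4]]
c = char_poly(A)                 # t^2 - 5t - 2 for this sample matrix
assert poly_at_matrix(c, A) == [[0, 0], [0, 0]]    # p(A) = 0
```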
Lemma 3.5.1 Consider the following product.\n\n\[ \n\left( \begin{array}{l} 0 \\ I \\ 0 \end{array}\right) \left( \begin{array}{lll} 0 & I & 0 \end{array}\right) \n\]\n\nwhere the first is \( n \times r \) and the second is \( r \times n \) . The small identity matrix \( I \) is an \( r \times r \) matrix and there are \( l \) zero rows above \( I \) and \( l \) zero columns to the left of \( I \) in the right matrix. Then the product of these matrices is a block matrix of the form\n\n\[ \n\left( \begin{array}{lll} 0 & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & 0 \end{array}\right) \n\]
Proof: From the definition of the way you multiply matrices, the product is\n\n\[ \n\left( \begin{matrix} \left( \begin{array}{l} \mathbf{0} \\ I \\ \mathbf{0} \end{array}\right) \mathbf{0} & \cdots & \left( \begin{array}{l} \mathbf{0} \\ I \\ \mathbf{0} \end{array}\right) \mathbf{0} & \left( \begin{array}{l} \mathbf{0} \\ I \\ \mathbf{0} \end{array}\right) {\mathbf{e}}_{1} & \cdots & \left( \begin{array}{l} \mathbf{0} \\ I \\ \mathbf{0} \end{array}\right) {\mathbf{e}}_{r} & \left( \begin{array}{l} \mathbf{0} \\ I \\ \mathbf{0} \end{array}\right) \mathbf{0} & \cdots & \left( \begin{array}{l} \mathbf{0} \\ I \\ \mathbf{0} \end{array}\right) \mathbf{0} \end{matrix}\right) \n\]\n\nwhich yields the claimed result. In the formula, \( {\mathbf{e}}_{j} \) refers to the column vector of length \( r \) which has a 1 in the \( {j}^{\text{th }} \) position.
Yes
Example 3.5.3 Let an \( n \times n \) matrix have the form \( A = \left( \begin{array}{ll} a & \mathbf{b} \\ \mathbf{c} & P \end{array}\right) \) where \( P \) is \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) . Multiply it by \( B = \left( \begin{array}{ll} p & \mathbf{q} \\ \mathbf{r} & Q \end{array}\right) \) where \( B \) is also an \( n \times n \) matrix and \( Q \) is \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) .
\[ \left( \begin{array}{ll} a & \mathbf{b} \\ \mathbf{c} & P \end{array}\right) \left( \begin{array}{ll} p & \mathbf{q} \\ \mathbf{r} & Q \end{array}\right) = \left( \begin{array}{ll} {ap} + \mathbf{b}\mathbf{r} & a\mathbf{q} + \mathbf{b}Q \\ p\mathbf{c} + P\mathbf{r} & \mathbf{c}\mathbf{q} + {PQ} \end{array}\right) \]
Yes
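The block formula of Example 3.5.3 can be confirmed by assembling the full matrices from their blocks and comparing the ordinary product with the blockwise computation. An illustrative Python sketch with arbitrary sample blocks:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Blocks of A: scalar a, 1x2 row b, 2x1 column c, 2x2 block P (likewise for B).
a, b, c, P = 2, [1, 3], [4, 5], [[1, 0], [2, 1]]
p, q, r, Q = 7, [1, 2], [0, 1], [[3, 1], [1, 2]]

A = [[a] + b] + [[c[i]] + P[i] for i in range(2)]
B = [[p] + q] + [[r[i]] + Q[i] for i in range(2)]

# The four blocks of the product, per the formula of Example 3.5.3.
tl = a * p + sum(b[k] * r[k] for k in range(2))                            # ap + br
tr = [a * q[j] + sum(b[k] * Q[k][j] for k in range(2)) for j in range(2)]  # aq + bQ
bl = [p * c[i] + sum(P[i][k] * r[k] for k in range(2)) for i in range(2)]  # pc + Pr
br = [[c[i] * q[j] + sum(P[i][k] * Q[k][j] for k in range(2))
       for j in range(2)] for i in range(2)]                               # cq + PQ

blockwise = [[tl] + tr] + [[bl[i]] + br[i] for i in range(2)]
assert matmul(A, B) == blockwise
```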
Theorem 3.5.4 Let \( A \) be an \( m \times n \) matrix and let \( B \) be an \( n \times m \) matrix for \( m \leq n \) . Then\n\n\[ \n{p}_{BA}\left( t\right) = {t}^{n - m}{p}_{AB}\left( t\right) \n\] \n\nso the eigenvalues of \( {BA} \) and \( {AB} \) are the same including multiplicities except that \( {BA} \) has \( n - m \) extra zero eigenvalues. Here \( {p}_{A}\left( t\right) \) denotes the characteristic polynomial of the matrix A.
Proof: Use block multiplication to write\n\n\[ \n\left( \begin{matrix} {AB} & 0 \\ B & 0 \end{matrix}\right) \left( \begin{array}{ll} I & A \\ 0 & I \end{array}\right) = \left( \begin{matrix} {AB} & {ABA} \\ B & {BA} \end{matrix}\right) \n\] \n\n\[ \n\left( \begin{matrix} I & A \\ 0 & I \end{matrix}\right) \left( \begin{matrix} 0 & 0 \\ B & {BA} \end{matrix}\right) = \left( \begin{matrix} {AB} & {ABA} \\ B & {BA} \end{matrix}\right) . \n\] \n\nTherefore,\n\n\[ \n{\left( \begin{matrix} I & A \\ 0 & I \end{matrix}\right) }^{-1}\left( \begin{matrix} {AB} & 0 \\ B & 0 \end{matrix}\right) \left( \begin{matrix} I & A \\ 0 & I \end{matrix}\right) = \left( \begin{matrix} 0 & 0 \\ B & {BA} \end{matrix}\right) \n\] \n\nSince the two matrices above are similar, it follows that \( \left( \begin{matrix} 0 & 0 \\ B & {BA} \end{matrix}\right) \) and \( \left( \begin{matrix} {AB} & 0 \\ B & 0 \end{matrix}\right) \) have the same characteristic polynomials. See Problem 8 on Page 82. Therefore, noting that \( {BA} \) is an \( n \times n \) matrix and \( {AB} \) is an \( m \times m \) matrix,\n\n\[ \n{t}^{m}\det \left( {{tI} - {BA}}\right) = {t}^{n}\det \left( {{tI} - {AB}}\right) \n\] \n\nand so \( \det \left( {{tI} - {BA}}\right) = {p}_{BA}\left( t\right) = {t}^{n - m}\det \left( {{tI} - {AB}}\right) = {t}^{n - m}{p}_{AB}\left( t\right) \) . \( \blacksquare \)
Yes
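Theorem 3.5.4 can be spot-checked by evaluating \( \det(tI - BA) \) and \( t^{n-m}\det(tI - AB) \) at sample values of \( t \) with exact integer arithmetic. An illustrative Python sketch (the matrices \( A, B \) are arbitrary, with \( m = 2, n = 3 \)); checking at a few points is of course only a sanity check, not a proof:

```python
from itertools import permutations
from math import prod

def det(M):
    n = len(M)
    def sgn(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def p_char(M, t):
    # Evaluate det(tI - M) at a number t.
    n = len(M)
    return det([[t * (i == j) - M[i][j] for j in range(n)] for i in range(n)])

A = [[1, 2, 0], [0, 1, 3]]       # 2x3, so m = 2
B = [[1, 0], [2, 1], [0, 1]]     # 3x2, so n = 3
AB, BA = matmul(A, B), matmul(B, A)
for t in (-1, 2, 3, 5):          # p_BA(t) = t^{n-m} p_AB(t) at sample points
    assert p_char(BA, t) == t ** (3 - 2) * p_char(AB, t)
```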
Theorem 4.1.6 To perform any of the three row operations on a matrix \( A \) it suffices to do the row operation on the identity matrix obtaining an elementary matrix \( E \) and then take the product, EA. Furthermore, each elementary matrix is invertible and its inverse is an elementary matrix.
Proof: The first part of this theorem has been proved in Lemmas 4.1.3 - 4.1.5. It only remains to verify the claim about the inverses. Consider first the elementary matrices corresponding to row operation of type three.\n\n\[ E\left( {-c \times i + j}\right) E\left( {c \times i + j}\right) = I \]\n\nThis follows because the first matrix takes \( c \) times row \( i \) in the identity and adds it to row \( j \) . When multiplied on the left by \( E\left( {-c \times i + j}\right) \) it follows from the first part of this theorem that you take the \( {i}^{\text{th }} \) row of \( E\left( {c \times i + j}\right) \) which coincides with the \( {i}^{\text{th }} \) row of \( I \) since that row was not changed, multiply it by \( - c \) and add to the \( {j}^{\text{th }} \) row of \( E\left( {c \times i + j}\right) \) which was the \( {j}^{\text{th }} \) row of \( I \) added to \( c \) times the \( {i}^{\text{th }} \) row of \( I \) . Thus \( E\left( {-c \times i + j}\right) \) multiplied on the left undoes the row operation which resulted in \( E\left( {c \times i + j}\right) \) . The same argument applied to the product\n\n\[ E\left( {c \times i + j}\right) E\left( {-c \times i + j}\right) \]\n\nreplacing \( c \) with \( - c \) in the argument yields that this product is also equal to \( I \) . Therefore, \( E{\left( c \times i + j\right) }^{-1} = E\left( {-c \times i + j}\right) \) .\n\nSimilar reasoning shows that for \( E\left( {c, i}\right) \) the elementary matrix which comes from multiplying the \( {i}^{th} \) row by the nonzero constant, \( c \) ,\n\n\[ E{\left( c, i\right) }^{-1} = E\left( {{c}^{-1}, i}\right) . \]\n\nFinally, consider \( {P}^{ij} \) which involves switching the \( {i}^{th} \) and the \( {j}^{th} \) rows.\n\n\[ {P}^{ij}{P}^{ij} = I \]\n\nbecause by the first part of this theorem, multiplying on the left by \( {P}^{ij} \) switches the \( {i}^{th} \) and \( {j}^{th} \) rows of \( {P}^{ij} \) which was obtained from switching the \( {i}^{th} \) and \( {j}^{th} \) rows of the identity. First you switch them to get \( {P}^{ij} \) and then you multiply on the left by \( {P}^{ij} \) which switches these rows again and restores the identity matrix. Thus \( {\left( {P}^{ij}\right) }^{-1} = {P}^{ij} \) . \( \blacksquare \)
Yes
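The three types of elementary matrices and their inverses can be constructed directly, following the proof. An illustrative Python sketch (the helper names are ad hoc, not from the text):

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def E_add(c, i, j, n):
    # E(c x i + j): add c times row i to row j of the identity.
    E = identity(n)
    E[j][i] = Fraction(c)
    return E

def E_scale(c, i, n):
    # E(c, i): multiply row i of the identity by c != 0.
    E = identity(n)
    E[i][i] = Fraction(c)
    return E

def E_swap(i, j, n):
    # P^{ij}: switch rows i and j of the identity.
    E = identity(n)
    E[i], E[j] = E[j], E[i]
    return E

n = 3
I = identity(n)
assert matmul(E_add(-5, 0, 2, n), E_add(5, 0, 2, n)) == I   # E(-c x i+j) E(c x i+j) = I
assert matmul(E_scale(Fraction(1, 4), 1, n), E_scale(4, 1, n)) == I
assert matmul(E_swap(0, 2, n), E_swap(0, 2, n)) == I        # P^{ij} is its own inverse

# Multiplying on the left performs the row operation on A itself.
A = [[1, 2], [3, 4], [5, 6]]
assert matmul(E_add(2, 0, 1, n), A) == [[1, 2], [5, 8], [5, 6]]
```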
Lemma 4.2.3 Let \( B \) and \( A \) be two \( m \times n \) matrices and suppose \( B \) results from a row operation applied to \( A \) . Then the \( {k}^{\text{th }} \) column of \( B \) is a linear combination of the \( {i}_{1},\cdots ,{i}_{r} \) columns of \( B \) if and only if the \( {k}^{\text{th }} \) column of \( A \) is a linear combination of the \( {i}_{1},\cdots ,{i}_{r} \) columns of \( A \) . Furthermore, the scalars in the linear combination are the same. (The linear relationship between the \( {k}^{\text{th }} \) column of \( A \) and the \( {i}_{1},\cdots ,{i}_{r} \) columns of \( A \) is the same as the linear relationship between the \( {k}^{\text{th }} \) column of \( B \) and the \( {i}_{1},\cdots ,{i}_{r} \) columns of \( B \) .)
Proof: Let \( A \) equal the following matrix in which the \( {\mathbf{a}}_{k} \) are the columns\n\n\[ \left( \begin{array}{llll} {\mathbf{a}}_{1} & {\mathbf{a}}_{2} & \cdots & {\mathbf{a}}_{n} \end{array}\right) \]\n\nand let \( B \) equal the following matrix in which the columns are given by the \( {\mathbf{b}}_{k} \)\n\n\[ \left( \begin{array}{llll} {\mathbf{b}}_{1} & {\mathbf{b}}_{2} & \cdots & {\mathbf{b}}_{n} \end{array}\right) \]\n\nThen by Theorem 4.1.6, \( {\mathbf{b}}_{k} = E{\mathbf{a}}_{k} \) where \( E \) is an elementary matrix. Suppose then that one of the columns of \( A \) is a linear combination of some other columns of \( A \) . Say\n\n\[ {\mathbf{a}}_{k} = \mathop{\sum }\limits_{{r \in S}}{c}_{r}{\mathbf{a}}_{r} \]\n\nThen multiplying by \( E \) ,\n\n\[ {\mathbf{b}}_{k} = E{\mathbf{a}}_{k} = \mathop{\sum }\limits_{{r \in S}}{c}_{r}E{\mathbf{a}}_{r} = \mathop{\sum }\limits_{{r \in S}}{c}_{r}{\mathbf{b}}_{r}. \]\n\nThe converse follows the same way upon multiplying by \( {E}^{-1} \), which is also an elementary matrix by Theorem 4.1.6. \( \blacksquare \)
Yes
Corollary 4.2.4 Let \( A \) and \( B \) be two \( m \times n \) matrices such that \( B \) is obtained by applying a row operation to \( A \) . Then the two matrices have the same rank.
Proof: Lemma 4.2.3 says the linear relationships are the same between the columns of \( A \) and those of \( B \) . Therefore, the column rank of the two matrices is the same.
Yes
Find the rank of the following matrix and identify columns whose linear combinations yield all the other columns.\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 3 & 2 \\ 1 & 3 & 6 & 0 & 2 \\ 3 & 7 & 8 & 6 & 6 \end{matrix}\right) \]
Take \( \left( {-1}\right) \) times the first row and add to the second and then take \( \left( {-3}\right) \) times the first row and add to the third. This yields\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 1 & 5 & - 3 & 0 \\ 0 & 1 & 5 & - 3 & 0 \end{matrix}\right) \]\n\nBy the above corollary, this matrix has the same rank as the first matrix. Now take (-1) times the second row and add to the third row yielding\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 1 & 5 & - 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\nAt this point it is clear the rank is 2 . This is because every column is in the span of the first two and these first two columns are linearly independent.
Yes
Find the rank of the following matrix and identify columns whose linear combinations yield all the other columns.\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 3 & 2 \\ 1 & 2 & 6 & 0 & 2 \\ 3 & 6 & 8 & 6 & 6 \end{matrix}\right) \]
Take \( \left( {-1}\right) \) times the first row and add to the second and then take \( \left( {-3}\right) \) times the first row and add to the last row. This yields\n\n\[ \n\left( \begin{matrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 0 & 5 & - 3 & 0 \\ 0 & 0 & 5 & - 3 & 0 \end{matrix}\right)\n\]\n\nNow multiply the second row by \( 1/5 \) and then add \( \left( {-5}\right) \) times it to the last row.\n\n\[ \n\left( \begin{matrix} 1 & 2 & 1 & 3 & 2 \\ 0 & 0 & 1 & - 3/5 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{matrix}\right)\n\]\n\nAdd \( \left( {-1}\right) \) times the second row to the first.\n\n\[ \n\left( \begin{matrix} 1 & 2 & 0 & \frac{18}{5} & 2 \\ 0 & 0 & 1 & - 3/5 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{matrix}\right)\n\]\n\nIt is now clear the rank of this matrix is 2 because the first and third columns form a basis for the column space.\n\nThe last matrix displayed is the row reduced echelon form of the original matrix.
Yes
Theorem 4.3.2 Let \( A \) be an \( m \times n \) matrix. Then \( A \) has a row reduced echelon form determined by a simple process.
Proof: Viewing the columns of \( A \) from left to right take the first nonzero column. Pick a nonzero entry in this column and switch the row containing this entry with the top row of A. Now divide this new top row by the value of this nonzero entry to get a 1 in this position and then use row operations to make all entries below this entry equal to zero. Thus the first nonzero column is now \( {\mathbf{e}}_{1} \) . Denote the resulting matrix by \( {A}_{1} \) . Consider the submatrix of \( {A}_{1} \) to the right of this column and below the first row. Do exactly the same thing for it that was done for \( A \) . This time the \( {\mathbf{e}}_{1} \) will refer to \( {\mathbb{F}}^{m - 1} \) . Use this 1 and row operations to zero out every entry above it in the rows of \( {A}_{1} \) . Call the resulting matrix \( {A}_{2} \) . Thus \( {A}_{2} \) satisfies the conditions of the above definition up to the column just encountered. Continue this way till every column has been dealt with and the result must be in row reduced echelon form. \( \blacksquare \)
Yes
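The process described in the proof of Theorem 4.3.2 translates almost line by line into code. A minimal sketch, assuming exact rational arithmetic via `fractions` (illustrative, not the text's algorithm verbatim):

```python
from fractions import Fraction

def rref(A):
    # Row reduced echelon form by the process of Theorem 4.3.2.
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    row = 0
    for col in range(n):
        # Find a nonzero entry in this column at or below the current row.
        piv = next((r for r in range(row, m) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]       # switch it into position
        pv = M[row][col]
        M[row] = [x / pv for x in M[row]]     # make the leading entry a 1
        for r in range(m):                    # zero out the rest of the column
            if r != row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[row])]
        row += 1
        if row == m:
            break
    return M

A = [[1, 2, 1], [2, 4, 0], [1, 2, 3]]
assert rref(A) == [[1, 2, 0], [0, 0, 1], [0, 0, 0]]
```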
Corollary 4.3.5 The row reduced echelon form is unique. That is if \( B, C \) are two matrices in row reduced echelon form and both are row equivalent to \( A \), then \( B = C \) .
Proof: Suppose \( B \) and \( C \) are both row reduced echelon forms for the matrix \( A \) . Then they clearly have the same zero columns since row operations leave zero columns unchanged. If \( B \) has the sequence \( {\mathbf{e}}_{1},{\mathbf{e}}_{2},\cdots ,{\mathbf{e}}_{r} \) occurring for the first time in the positions, \( {i}_{1},{i}_{2},\cdots ,{i}_{r} \) , the description of the row reduced echelon form means that each of these columns is not a linear combination of the preceding columns. Therefore, by Lemma 4.2.3, the same is true of the columns in positions \( {i}_{1},{i}_{2},\cdots ,{i}_{r} \) for \( C \) . It follows from the description of the row reduced echelon form, that \( {\mathbf{e}}_{1},\cdots ,{\mathbf{e}}_{r} \) occur respectively for the first time in columns \( {i}_{1},{i}_{2},\cdots ,{i}_{r} \) for \( C \) . Thus \( B, C \) have the same columns in these positions. By Lemma 4.2.3, the other columns in the two matrices are linear combinations, involving the same scalars, of the columns in the \( {i}_{1},\cdots ,{i}_{r} \) positions. Thus each column of \( B \) is identical to the corresponding column in \( C \) . \( \blacksquare \)
Yes
Corollary 4.3.6 Let \( A \) be an \( m \times n \) matrix and let \( R \) denote the row reduced echelon form obtained from \( A \) by row operations. Then there exists a sequence of elementary matrices, \( {E}_{1},\cdots ,{E}_{p} \) such that\n\n\[ \left( {{E}_{p}{E}_{p - 1}\cdots {E}_{1}}\right) A = R. \]
Proof: This follows from the fact that row operations are equivalent to multiplication on the left by an elementary matrix.
No
Corollary 4.3.7 Let \( A \) be an invertible \( n \times n \) matrix. Then \( A \) equals a finite product of elementary matrices.
Proof: Since \( {A}^{-1} \) is given to exist, Theorem 3.3.18 implies \( \det \left( A\right) \neq 0 \) . Hence the determinant rank, and so also the column rank, of \( A \) equals \( n \) . Thus the columns of \( A \) form a linearly independent set and the row reduced echelon form of \( A \) is \( I \) . Therefore, by Corollary 4.3.6 there is a sequence of elementary matrices, \( {E}_{1},\cdots ,{E}_{p} \) such that\n\n\[ \left( {{E}_{p}{E}_{p - 1}\cdots {E}_{1}}\right) A = I.\]\n\nBut now multiply on the left on both sides by \( {E}_{p}^{-1} \) then by \( {E}_{p - 1}^{-1} \) and then by \( {E}_{p - 2}^{-1} \) etc. until you get\n\n\[ A = {E}_{1}^{-1}{E}_{2}^{-1}\cdots {E}_{p - 1}^{-1}{E}_{p}^{-1} \]\n\nand by Theorem 4.1.6 each matrix in this product is an elementary matrix. \( \blacksquare \)
Corollary 4.3.8 The rank of a matrix equals the number of nonzero pivot columns. Furthermore, every column is contained in the span of the pivot columns.
Proof: Write the row reduced echelon form for the matrix. From Corollary 4.2.4 this row reduced matrix has the same rank as the original matrix. Deleting all the zero rows and all the columns in the row reduced echelon form which do not correspond to a pivot column, yields an \( r \times r \) identity submatrix in which \( r \) is the number of pivot columns. Thus the rank is at least \( r \) . From Lemma 4.2.3 every column of \( A \) is a linear combination of the pivot columns since this is true by definition for the row reduced echelon form. Therefore, the rank is no more than \( r \) . \( \blacksquare \)
Corollary 4.3.9 Suppose \( A \) is an \( m \times n \) matrix and that \( m < n \). That is, the number of rows is less than the number of columns. Then one of the columns of \( A \) is a linear combination of the preceding columns of \( A \) .
Proof: Since \( m < n \), not all the columns of \( A \) can be pivot columns. That is, in the row reduced echelon form, say \( {\mathbf{e}}_{i} \) occurs for the first time in position \( {r}_{i} \) where \( {r}_{1} < {r}_{2} < \cdots < {r}_{p} \) and \( p \leq m \). Since \( m < n \), there exists some column in the row reduced echelon form which is a linear combination of the preceding columns. By Lemma 4.2.3 the same is true of the columns of \( A \). \( \blacksquare \)
Theorem 4.3.12 Let \( A \) be an \( m \times n \) matrix. Then \( \operatorname{rank}\left( A\right) + \dim \left( {\ker \left( A\right) }\right) = n \).
Proof: Since \( \ker \left( A\right) \) is a subspace, there exists a basis for \( \ker \left( A\right) ,\left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k}}\right\} \). Also let \( \left\{ {A{\mathbf{y}}_{1},\cdots, A{\mathbf{y}}_{l}}\right\} \) be a basis for \( A\left( {\mathbb{F}}^{n}\right) \). Let \( \mathbf{u} \in {\mathbb{F}}^{n} \). Then there exist unique scalars \( {c}_{i} \) such that\n\n\[ A\mathbf{u} = \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}A{\mathbf{y}}_{i} \]\n\nIt follows that\n\n\[ A\left( {\mathbf{u} - \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}{\mathbf{y}}_{i}}\right) = \mathbf{0} \]\n\nand so the vector in parentheses is in \( \ker \left( A\right) \). Thus there exist unique \( {b}_{j} \) such that\n\n\[ \mathbf{u} = \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}{\mathbf{y}}_{i} + \mathop{\sum }\limits_{{j = 1}}^{k}{b}_{j}{\mathbf{x}}_{j} \]\n\nSince \( \mathbf{u} \) was arbitrary, this shows \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k},{\mathbf{y}}_{1},\cdots ,{\mathbf{y}}_{l}}\right\} \) spans \( {\mathbb{F}}^{n} \). If these vectors are independent, then they will form a basis and the claimed equation will be obtained. Suppose then that\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}{\mathbf{y}}_{i} + \mathop{\sum }\limits_{{j = 1}}^{k}{b}_{j}{\mathbf{x}}_{j} = \mathbf{0} \]\n\nApply \( A \) to both sides. This yields\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{l}{c}_{i}A{\mathbf{y}}_{i} = \mathbf{0} \]\n\nand so each \( {c}_{i} = 0 \). Then the independence of the \( {\mathbf{x}}_{j} \) implies each \( {b}_{j} = 0 \). \( \blacksquare \)
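The rank plus nullity theorem is easy to check numerically. The following is a short NumPy sketch (the matrix is an arbitrary example of mine, not one from the text): the rank comes from `numpy.linalg.matrix_rank` and the nullity from counting the vanishing singular values of \( A \).

```python
import numpy as np

# An arbitrary 3x5 example matrix; the third row is the sum of the
# first two, so the rank is 2 and the nullity should be 5 - 2 = 3.
A = np.array([[1., 2., 0., 1., 3.],
              [0., 1., 1., 1., 1.],
              [1., 3., 1., 2., 4.]])

m, n = A.shape
rank = np.linalg.matrix_rank(A)

# The rows of Vh belonging to zero singular values span ker(A), so the
# nullity is n minus the number of nonzero singular values.
_, s, Vh = np.linalg.svd(A)
tol = max(m, n) * np.finfo(float).eps * s[0]
nullity = n - np.sum(s > tol)

assert rank + nullity == n
```

The same check works for any matrix: row reducing cannot change either number, so the two dimensions always add to the number of columns.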
Proposition 4.4.1 Let \( A \) be an \( m \times n \) matrix and let \( \mathbf{b} \) be an \( m \times 1 \) column vector. Then there exists a solution to (4.4) if and only if\n\n\[ \operatorname{rank}\left( \begin{array}{lll} A & \mid & \mathbf{b} \end{array}\right) = \operatorname{rank}\left( A\right) . \]
Proof: Place \( \left( {A \mid \mathbf{b}}\right) \) and \( A \) in row reduced echelon form, respectively \( B \) and \( C \) . If the above condition on rank is true, then both \( B \) and \( C \) have the same number of nonzero rows. In particular, you cannot have a row of the form\n\n\[ \left( \begin{array}{llll} 0 & \cdots & 0 & \star \end{array}\right) \]\n\nwhere \( \star \neq 0 \) in \( B \) . Therefore, there will exist a solution to the system (4.4).\n\nConversely, suppose there exists a solution. This means there cannot be such a row in \( B \) as described above. Therefore, \( B \) and \( C \) must have the same number of zero rows and so they have the same number of nonzero rows. Therefore, the rank of the two matrices in (4.5) is the same. \( \blacksquare \)
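The rank criterion is straightforward to test numerically. A minimal NumPy sketch, with a hypothetical matrix chosen so that one right-hand side lies in the column space and one does not:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [0., 1.]])                      # rank(A) = 2

def solvable(A, b):
    # A x = b has a solution iff appending b does not raise the rank.
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

b_good = A @ np.array([1., 1.])               # in the column space by construction
b_bad = np.array([1., 1., 0.])                # not a combination of the columns

assert solvable(A, b_good)
assert not solvable(A, b_bad)
```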
Lemma 4.5.2 Let \( A \) be a real \( m \times n \) matrix, let \( \mathbf{x} \in {\mathbb{R}}^{n} \) and \( \mathbf{y} \in {\mathbb{R}}^{m} \) . Then\n\n\[ \left( {A\mathbf{x} \cdot \mathbf{y}}\right) = \left( {\mathbf{x} \cdot {A}^{T}\mathbf{y}}\right) \]
Proof: This follows right away from the definition of the inner product and matrix multiplication.\n\n\[ \left( {A\mathbf{x} \cdot \mathbf{y}}\right) = \mathop{\sum }\limits_{{k, l}}{A}_{kl}{x}_{l}{y}_{k} = \mathop{\sum }\limits_{{k, l}}{\left( {A}^{T}\right) }_{lk}{x}_{l}{y}_{k} = \left( {\mathbf{x} \cdot {A}^{T}\mathbf{y}}\right) .\blacksquare \]
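The identity can also be confirmed numerically; a quick NumPy check with random data (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y = rng.standard_normal(3)

# (Ax . y) should equal (x . A^T y) up to rounding error.
lhs = np.dot(A @ x, y)
rhs = np.dot(x, A.T @ y)
assert abs(lhs - rhs) < 1e-9
```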
Theorem 4.5.3 Let \( A \) be a real \( m \times n \) matrix and let \( \mathbf{b} \in {\mathbb{R}}^{m} \) . There exists a solution, \( \mathbf{x} \) to the equation \( A\mathbf{x} = \mathbf{b} \) if and only if \( \mathbf{b} \in \ker {\left( {A}^{T}\right) }^{ \bot } \) .
Proof: First suppose \( \mathbf{b} \in \ker {\left( {A}^{T}\right) }^{ \bot } \) . Then this says that if \( {A}^{T}\mathbf{x} = \mathbf{0} \), it follows that \( \mathbf{b} \cdot \mathbf{x} = 0 \) . In other words, taking the transpose, if\n\n\[ \n{\mathbf{x}}^{T}A = \mathbf{0}\text{, then}{\mathbf{x}}^{T}\mathbf{b} = 0\text{.} \n\]\n\nThus, if \( P \) is a product of elementary matrices such that \( {PA} \) is in row reduced echelon form, then if \( {PA} \) has a row of zeros in the \( {k}^{th} \) position, there is also a zero in the \( {k}^{th} \) position of \( P\mathbf{b} \) . Thus \( \operatorname{rank}\left( {A \mid \mathbf{b}}\right) = \operatorname{rank}\left( A\right) \) . By Proposition 4.4.1, there exists a solution \( \mathbf{x} \) to the system \( A\mathbf{x} = \mathbf{b} \) . It remains to go the other direction.\n\nLet \( \mathbf{z} \in \ker \left( {A}^{T}\right) \) and suppose \( A\mathbf{x} = \mathbf{b} \) . I need to verify \( \mathbf{b} \cdot \mathbf{z} = 0 \) . By Lemma 4.5.2,\n\n\[ \n\mathbf{b} \cdot \mathbf{z} = A\mathbf{x} \cdot \mathbf{z} = \mathbf{x} \cdot {A}^{T}\mathbf{z} = \mathbf{x} \cdot \mathbf{0} = 0. \blacksquare \n\]
Corollary 4.5.4 Let \( A \) be an \( m \times n \) matrix. Then \( A \) maps \( {\mathbb{R}}^{n} \) onto \( {\mathbb{R}}^{m} \) if and only if the only solution to \( {A}^{T}\mathbf{x} = \mathbf{0} \) is \( \mathbf{x} = \mathbf{0} \).
Proof: If the only solution to \( {A}^{T}\mathbf{x} = \mathbf{0} \) is \( \mathbf{x} = \mathbf{0} \), then \( \ker \left( {A}^{T}\right) = \{ \mathbf{0}\} \) and so \( \ker {\left( {A}^{T}\right) }^{ \bot } = \) \( {\mathbb{R}}^{m} \) because every \( \mathbf{b} \in {\mathbb{R}}^{m} \) has the property that \( \mathbf{b} \cdot \mathbf{0} = 0 \) . Therefore, \( A\mathbf{x} = \mathbf{b} \) has a solution for any \( \mathbf{b} \in {\mathbb{R}}^{m} \) because the \( \mathbf{b} \) for which there is a solution are those in \( \ker {\left( {A}^{T}\right) }^{ \bot } \) by Theorem 4.5.3. In other words, \( A \) maps \( {\mathbb{R}}^{n} \) onto \( {\mathbb{R}}^{m} \) .\n\nConversely if \( A \) is onto, then by Theorem 4.5.3 every \( \mathbf{b} \in {\mathbb{R}}^{m} \) is in \( \ker {\left( {A}^{T}\right) }^{ \bot } \) and so if \( {A}^{T}\mathbf{x} = \mathbf{0} \), then \( \mathbf{b} \cdot \mathbf{x} = 0 \) for every \( \mathbf{b} \) . In particular, this holds for \( \mathbf{b} = \mathbf{x} \) . Hence if \( {A}^{T}\mathbf{x} = \mathbf{0} \) , then \( \mathbf{x} = \mathbf{0} \) . \( \blacksquare \)
Example 4.5.5 Let \( A \) be an \( m \times n \) matrix in which \( m > n \) . Then \( A \) cannot map onto \( {\mathbb{R}}^{m} \) .
The reason for this is that \( {A}^{T} \) is an \( n \times m \) matrix with \( m > n \) and so in the augmented matrix\n\n\[ \left( {{A}^{T} \mid \mathbf{0}}\right) \]\n\nthere must be some free variables. Thus there exists a nonzero vector \( \mathbf{x} \) such that \( {A}^{T}\mathbf{x} = \mathbf{0} \) .
Can you write \( \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \) in the form LU as just described?
To do so you would need\n\n\[ \left( \begin{array}{ll} 1 & 0 \\ x & 1 \end{array}\right) \left( \begin{array}{ll} a & b \\ 0 & c \end{array}\right) = \left( \begin{matrix} a & b \\ {xa} & {xb} + c \end{matrix}\right) = \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) . \]\n\nTherefore, \( b = 1 \) and \( a = 0 \) . Also, from the bottom row, \( {xa} = 1 \), which cannot happen when \( a = 0 \) . Therefore, you can’t write this matrix in the form \( {LU} \) . It has no \( {LU} \) factorization. This is what I mean above by saying the method lacks generality.
Find an LU factorization for \( A = \left( \begin{matrix} 1 & 2 & 3 \\ 2 & 1 & - 4 \\ 1 & 5 & 2 \end{matrix}\right) \)
Write the matrix next to the identity matrix as shown.\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 \\ 2 & 1 & - 4 \\ 1 & 5 & 2 \end{matrix}\right) \]\n\nThe process involves doing row operations to the matrix on the right while simultaneously updating successive columns of the matrix on the left. First take -2 times the first row and add to the second in the matrix on the right.\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 \\ 0 & - 3 & - {10} \\ 1 & 5 & 2 \end{matrix}\right) \]\n\nNote the method for updating the matrix on the left. The 2 in the second entry of the first column is there because -2 times the first row of \( A \) added to the second row of \( A \) produced a 0 . Now replace the third row in the matrix on the right by -1 times the first row added to the third. Thus the next step is\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 \\ 0 & - 3 & - {10} \\ 0 & 3 & - 1 \end{matrix}\right) \]\n\nFinally, add the second row to the bottom row and make the following changes\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & - 1 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 \\ 0 & - 3 & - {10} \\ 0 & 0 & - {11} \end{matrix}\right) \]\n\nAt this point, stop because the matrix on the right is upper triangular. An \( {LU} \) factorization is the above.
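The factorization can be verified by multiplying the factors back together; a short NumPy check of the \( L \) and \( U \) just found:

```python
import numpy as np

L = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [1., -1., 1.]])
U = np.array([[1., 2., 3.],
              [0., -3., -10.],
              [0., 0., -11.]])
A = np.array([[1., 2., 3.],
              [2., 1., -4.],
              [1., 5., 2.]])

# L is unit lower triangular, U is upper triangular, and L U recovers A.
assert np.allclose(L @ U, A)
```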
Find an LU factorization for \( A = \left( \begin{array}{lllll} 1 & 2 & 1 & 2 & 1 \\ 2 & 0 & 2 & 1 & 1 \\ 2 & 3 & 1 & 3 & 2 \\ 1 & 0 & 1 & 1 & 2 \end{array}\right) \) .
This time everything is done at once for a whole column. This saves trouble. First multiply the first row by \( \left( {-1}\right) \) and then add to the last row. Next take \( \left( {-2}\right) \) times the first and add to the second and then \( \left( {-2}\right) \) times the first and add to the third.\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 1 & 2 & 1 \\ 0 & - 4 & 0 & - 3 & - 1 \\ 0 & - 1 & - 1 & - 1 & 0 \\ 0 & - 2 & 0 & - 1 & 1 \end{matrix}\right) \]\n\nThis finishes the first column of \( L \) and the first column of \( U \) . Now take \( - \left( {1/4}\right) \) times the second row in the matrix on the right and add to the third followed by \( - \left( {1/2}\right) \) times the second added to the last.\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 \\ 2 & 1/4 & 1 & 0 \\ 1 & 1/2 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 1 & 2 & 1 \\ 0 & - 4 & 0 & - 3 & - 1 \\ 0 & 0 & - 1 & - 1/4 & 1/4 \\ 0 & 0 & 0 & 1/2 & 3/2 \end{matrix}\right) \]\n\nThis finishes the second column of \( L \) as well as the second column of \( U \) . Since the matrix on the right is upper triangular, stop. The \( {LU} \) factorization has now been obtained. This technique is called Doolittle's method.
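The column-at-a-time process is easy to automate. Below is a sketch of Doolittle's method without row interchanges (the function name is my own); it records each multiplier in \( L \) exactly as done by hand above, and raises an error on a zero pivot, which is what goes wrong for the matrix \( \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \) considered earlier.

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU factorization without row interchanges: for each column,
    store in L the multiplier used to zero each entry below the pivot."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    L, U = np.eye(m), A.copy()
    for j in range(min(m - 1, n)):
        if U[j, j] == 0.0:
            raise ValueError("zero pivot: no LU factorization without row swaps")
        for i in range(j + 1, m):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

A = np.array([[1., 2., 1., 2., 1.],
              [2., 0., 2., 1., 1.],
              [2., 3., 1., 3., 2.],
              [1., 0., 1., 1., 2.]])
L, U = doolittle_lu(A)
assert np.allclose(L @ U, A)
assert np.allclose(np.tril(U, -1), 0.0)   # U is upper triangular
```

Running this on the \( 4 \times 5 \) matrix above reproduces the \( L \) obtained by hand, with multipliers \( 2, 2, 1 \) in the first column and \( 1/4, 1/2 \) in the second.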
Suppose you want to find the solutions to \( \left( \begin{array}{llll} 1 & 2 & 3 & 2 \\ 4 & 3 & 1 & 1 \\ 1 & 2 & 3 & 0 \end{array}\right) \left( \begin{array}{l} x \\ y \\ z \\ w \end{array}\right) = \left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) \)
Of course one way is to write the augmented matrix and grind away. However, this involves more row operations than the computation of an \( {LU} \) factorization and it turns out that an \( {LU} \) factorization can give the solution quickly. Here is how. The following is an \( {LU} \) factorization for the matrix.\n\n\[ \left( \begin{array}{llll} 1 & 2 & 3 & 2 \\ 4 & 3 & 1 & 1 \\ 1 & 2 & 3 & 0 \end{array}\right) = \left( \begin{array}{lll} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right) \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & - 5 & - {11} & - 7 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) . \]\n\nLet \( U\mathbf{x} = \mathbf{y} \) and consider \( L\mathbf{y} = \mathbf{b} \) where in this case, \( \mathbf{b} = {\left( 1,2,3\right) }^{T} \) . Thus\n\n\[ \left( \begin{array}{lll} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right) \left( \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \end{array}\right) = \left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) \]\n\nwhich yields very quickly that \( \mathbf{y} = \left( \begin{matrix} 1 \\ - 2 \\ 2 \end{matrix}\right) \) . Now you can find \( \mathbf{x} \) by solving \( U\mathbf{x} = \mathbf{y} \) . Thus\n\nin this case,\n\n\[ \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & - 5 & - {11} & - 7 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) \left( \begin{array}{l} x \\ y \\ z \\ w \end{array}\right) = \left( \begin{matrix} 1 \\ - 2 \\ 2 \end{matrix}\right) \]\n\nwhich yields\n\n\[ \mathbf{x} = \left( \begin{matrix} - \frac{3}{5} + \frac{7}{5}t \\ \frac{9}{5} - \frac{11}{5}t \\ t \\ - 1 \end{matrix}\right), t \in \mathbb{R}. \]
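The two triangular solves can be checked in a few lines of NumPy; the particular solution below takes \( t = 0 \) in the formula above:

```python
import numpy as np

L = np.array([[1., 0., 0.],
              [4., 1., 0.],
              [1., 0., 1.]])
U = np.array([[1., 2., 3., 2.],
              [0., -5., -11., -7.],
              [0., 0., 0., -2.]])
b = np.array([1., 2., 3.])

# Forward substitution for L y = b (L is unit lower triangular).
y = b.copy()
for i in range(3):
    y[i] -= L[i, :i] @ y[:i]
assert np.allclose(y, [1., -2., 2.])

# U x = y has the free variable z = t; the particular solution with t = 0:
x = np.array([-3/5, 9/5, 0., -1.])
assert np.allclose(U @ x, y)
assert np.allclose((L @ U) @ x, b)   # so x solves the original system
```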
Example 5.4.1 Find a PLU factorization for the above matrix in (5.1).
Proceed as before trying to find the row echelon form of the matrix. First add -1 times the first row to the second row and then add -4 times the first to the third. This yields\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 4 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & 0 & 0 & - 2 \\ 0 & - 5 & - {11} & - 7 \end{matrix}\right) \]\n\nThere is no way to do only row operations involving replacing a row with itself added to a multiple of another row to the second matrix in such a way as to obtain an upper triangular matrix. Therefore, consider \( M \) with the bottom two rows switched.\n\n\[ {M}^{\prime } = \left( \begin{array}{llll} 1 & 2 & 3 & 2 \\ 4 & 3 & 1 & 1 \\ 1 & 2 & 3 & 0 \end{array}\right) \]\n\nNow try again with this matrix. First take -1 times the first row and add to the bottom row and then take -4 times the first row and add to the second row. This yields\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & - 5 & - {11} & - 7 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) \]\n\nThe second matrix is upper triangular and so the \( {LU} \) factorization of the matrix \( {M}^{\prime } \) is\n\n\[ \left( \begin{matrix} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & - 5 & - {11} & - 7 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) \]\n\nThus \( {M}^{\prime } = {PM} = {LU} \) where \( L \) and \( U \) are given above. Therefore, \( M = {P}^{2}M = {PLU} \) and\n\nso\n\[ \left( \begin{array}{llll} 1 & 2 & 3 & 2 \\ 1 & 2 & 3 & 0 \\ 4 & 3 & 1 & 1 \end{array}\right) = \left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right) \left( \begin{array}{lll} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right) \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & - 5 & - {11} & - 7 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) \]
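A quick NumPy check that the displayed factors really do multiply back to \( M \), and that the transposition \( P \) is its own inverse:

```python
import numpy as np

M = np.array([[1., 2., 3., 2.],
              [1., 2., 3., 0.],
              [4., 3., 1., 1.]])
P = np.array([[1., 0., 0.],
              [0., 0., 1.],
              [0., 1., 0.]])
L = np.array([[1., 0., 0.],
              [4., 1., 0.],
              [1., 0., 1.]])
U = np.array([[1., 2., 3., 2.],
              [0., -5., -11., -7.],
              [0., 0., 0., -2.]])

assert np.allclose(P @ L @ U, M)
assert np.allclose(P @ P, np.eye(3))   # a row swap undoes itself, so P^2 = I
```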
Use a PLU factorization of \( M \equiv \left( \begin{array}{llll} 1 & 2 & 3 & 2 \\ 1 & 2 & 3 & 0 \\ 4 & 3 & 1 & 1 \end{array}\right) \) to solve the system \( M\mathbf{x} = \mathbf{b} \) where \( \mathbf{b} = {\left( 1,2,3\right) }^{T} \).
Let \( U\mathbf{x} = \mathbf{y} \) and consider \( {PL}\mathbf{y} = \mathbf{b} \) . In other words, solve,\n\n\[ \left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right) \left( \begin{array}{lll} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right) \left( \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \end{array}\right) = \left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) . \]\n\nThen multiplying both sides by \( P \) gives\n\n\[ \left( \begin{array}{lll} 1 & 0 & 0 \\ 4 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right) \left( \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \end{array}\right) = \left( \begin{array}{l} 1 \\ 3 \\ 2 \end{array}\right) \]\n\nand so\n\[ \mathbf{y} = \left( \begin{array}{l} {y}_{1} \\ {y}_{2} \\ {y}_{3} \end{array}\right) = \left( \begin{matrix} 1 \\ - 1 \\ 1 \end{matrix}\right) \]\n\nNow \( U\mathbf{x} = \mathbf{y} \) and so it only remains to solve\n\n\[ \left( \begin{matrix} 1 & 2 & 3 & 2 \\ 0 & - 5 & - {11} & - 7 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) \left( \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{array}\right) = \left( \begin{matrix} 1 \\ - 1 \\ 1 \end{matrix}\right) \]\n\nwhich yields\n\[ \left( \begin{matrix} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{matrix}\right) = \left( \begin{matrix} \frac{1}{5} + \frac{7}{5}t \\ \frac{9}{10} - \frac{11}{5}t \\ t \\ - \frac{1}{2} \end{matrix}\right) : t \in \mathbb{R}. \]
One of the most important examples of an orthogonal matrix is the so-called Householder matrix. You have \( \mathbf{v} \) a unit vector and you form the matrix\n\n\[ I - 2\mathbf{v}{\mathbf{v}}^{T} \]\n\nThis is an orthogonal matrix which is also symmetric.
To see this, you use the rules of matrix operations.\n\n\[ {\left( I - 2\mathbf{v}{\mathbf{v}}^{T}\right) }^{T} = {I}^{T} - {\left( 2\mathbf{v}{\mathbf{v}}^{T}\right) }^{T} \]\n\n\[ = I - 2\mathbf{v}{\mathbf{v}}^{T} \] so it is symmetric. Now to show it is orthogonal,\n\n\[ \left( {I - 2{\mathbf{{vv}}}^{T}}\right) \left( {I - 2{\mathbf{{vv}}}^{T}}\right) = I - 2{\mathbf{{vv}}}^{T} - 2{\mathbf{{vv}}}^{T} + 4{\mathbf{{vv}}}^{T}{\mathbf{{vv}}}^{T} \]\n\n\[ = I - 4\mathbf{v}{\mathbf{v}}^{T} + 4\mathbf{v}{\mathbf{v}}^{T} = I \]\n\nbecause \( {\mathbf{v}}^{T}\mathbf{v} = \mathbf{v} \cdot \mathbf{v} = {\left| \mathbf{v}\right| }^{2} = 1 \) . Therefore, this is an example of an orthogonal matrix.
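Both claims are easy to check numerically with a random unit vector:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(4)
v /= np.linalg.norm(v)             # the construction requires a unit vector

H = np.eye(4) - 2.0 * np.outer(v, v)

assert np.allclose(H, H.T)                 # symmetric
assert np.allclose(H @ H.T, np.eye(4))     # orthogonal
```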
Given two vectors \( \mathbf{x},\mathbf{y} \) such that \( \left| \mathbf{x}\right| = \left| \mathbf{y}\right| \neq 0 \) but \( \mathbf{x} \neq \mathbf{y} \) and you want an orthogonal matrix \( Q \) such that \( Q\mathbf{x} = \mathbf{y} \) and \( Q\mathbf{y} = \mathbf{x} \).
The thing which works is the Householder matrix\n\[ Q \equiv I - 2\frac{\mathbf{x} - \mathbf{y}}{{\left| \mathbf{x} - \mathbf{y}\right| }^{2}}{\left( \mathbf{x} - \mathbf{y}\right) }^{T} \]\n\nHere is why this works.\n\n\[ Q\left( {\mathbf{x} - \mathbf{y}}\right) = \left( {\mathbf{x} - \mathbf{y}}\right) - 2\frac{\mathbf{x} - \mathbf{y}}{{\left| \mathbf{x} - \mathbf{y}\right| }^{2}}{\left( \mathbf{x} - \mathbf{y}\right) }^{T}\left( {\mathbf{x} - \mathbf{y}}\right) \]\n\n\[ = \left( {\mathbf{x} - \mathbf{y}}\right) - 2\frac{\mathbf{x} - \mathbf{y}}{{\left| \mathbf{x} - \mathbf{y}\right| }^{2}}{\left| \mathbf{x} - \mathbf{y}\right| }^{2} = \mathbf{y} - \mathbf{x} \]\n\n\[ Q\left( {\mathbf{x} + \mathbf{y}}\right) = \left( {\mathbf{x} + \mathbf{y}}\right) - 2\frac{\mathbf{x} - \mathbf{y}}{{\left| \mathbf{x} - \mathbf{y}\right| }^{2}}{\left( \mathbf{x} - \mathbf{y}\right) }^{T}\left( {\mathbf{x} + \mathbf{y}}\right) \]\n\n\[ = \left( {\mathbf{x} + \mathbf{y}}\right) - 2\frac{\mathbf{x} - \mathbf{y}}{{\left| \mathbf{x} - \mathbf{y}\right| }^{2}}\left( {\left( {\mathbf{x} - \mathbf{y}}\right) \cdot \left( {\mathbf{x} + \mathbf{y}}\right) }\right) \]\n\n\[ = \left( {\mathbf{x} + \mathbf{y}}\right) - 2\frac{\mathbf{x} - \mathbf{y}}{{\left| \mathbf{x} - \mathbf{y}\right| }^{2}}\left( {{\left| \mathbf{x}\right| }^{2} - {\left| \mathbf{y}\right| }^{2}}\right) = \mathbf{x} + \mathbf{y} \]\n\nHence\n\n\[ Q\mathbf{x} + Q\mathbf{y} = \mathbf{x} + \mathbf{y} \]\n\n\[ Q\mathbf{x} - Q\mathbf{y} = \mathbf{y} - \mathbf{x} \]\n\nAdding these equations, \( {2Q}\mathbf{x} = 2\mathbf{y} \) and subtracting them yields \( {2Q}\mathbf{y} = 2\mathbf{x} \).
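Here is a NumPy sketch with a hypothetical pair of vectors of equal length, confirming that this Householder matrix swaps them:

```python
import numpy as np

x = np.array([3., 0., 4.])
y = np.array([0., 5., 0.])        # |x| = |y| = 5 and x != y

w = x - y
Q = np.eye(3) - 2.0 * np.outer(w, w) / np.dot(w, w)

assert np.allclose(Q @ x, y)
assert np.allclose(Q @ y, x)
```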
Consider \( z = {x}_{1} - {x}_{2} \) subject to the constraints, \( {x}_{1} + 2{x}_{2} \leq {10},{x}_{1} + 2{x}_{2} \geq 2 \) , and \( 2{x}_{1} + {x}_{2} \leq 6,{x}_{i} \geq 0 \) . Find a simplex tableau for a problem of the form \( \mathbf{x} \geq \mathbf{0}, A\mathbf{x} = \mathbf{b} \) which is equivalent to the above problem.
You add in slack variables. These are nonnegative variables, one for each of the first three constraints, which change the first three inequalities into equations. Thus the first three inequalities become \( {x}_{1} + 2{x}_{2} + {x}_{3} = {10},{x}_{1} + 2{x}_{2} - {x}_{4} = 2 \), and \( 2{x}_{1} + {x}_{2} + {x}_{5} = 6,{x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5} \geq 0 \) . Now it is necessary to find a basic feasible solution. You mainly need to find a nonnegative solution to the equations,\n\n\[ {x}_{1} + 2{x}_{2} + {x}_{3} = {10} \]\n\n\[ {x}_{1} + 2{x}_{2} - {x}_{4} = 2 \]\n\n\[ 2{x}_{1} + {x}_{2} + {x}_{5} = 6 \]\n\nThe solution set for the above system is given by\n\n\[ {x}_{2} = \frac{2}{3}{x}_{4} - \frac{2}{3} + \frac{1}{3}{x}_{5},{x}_{1} = - \frac{1}{3}{x}_{4} + \frac{10}{3} - \frac{2}{3}{x}_{5},{x}_{3} = - {x}_{4} + 8. \]\n\nAn easy way to get a basic feasible solution is to let \( {x}_{4} = 8 \) and \( {x}_{5} = 1 \) . Then a feasible solution is\n\n\[ \left( {{x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5}}\right) = \left( {0,5,0,8,1}\right) . \]\n\nIt follows \( {z}^{0} = - 5 \) and the matrix (6.2), \( \left( \begin{matrix} A & \mathbf{0} & \mathbf{b} \\ - \mathbf{c} & 1 & 0 \end{matrix}\right) \) with the variables kept track of on the bottom is\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 0 & 0 & 0 & {10} \\ 1 & 2 & 0 & - 1 & 0 & 0 & 2 \\ 2 & 1 & 0 & 0 & 1 & 0 & 6 \\ - 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ {x}_{1} & {x}_{2} & {x}_{3} & {x}_{4} & {x}_{5} & 0 & 0 \end{matrix}\right) \]\n\nand the first thing to do is to permute the columns so that the list of variables on the bottom will have \( {x}_{1} \) and \( {x}_{3} \) at the end.\n\n\[ \left( \begin{matrix} 2 & 0 & 0 & 1 & 1 & 0 & {10} \\ 2 & - 1 & 0 & 1 & 0 & 0 & 2 \\ 1 & 0 & 1 & 2 & 0 & 0 & 6 \\ 1 & 0 & 0 & - 1 & 0 & 1 & 0 \\ {x}_{2} & {x}_{4} & {x}_{5} & {x}_{1} & {x}_{3} & 0 & 0 \end{matrix}\right) \]\n\nNext, as described above, take the row reduced echelon form of the top three lines of the above matrix. 
This yields\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 5 \\ 0 & 1 & 0 & 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & \frac{3}{2} & - \frac{1}{2} & 0 & 1 \end{matrix}\right) . \]\n\nNow do row operations to\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 5 \\ 0 & 1 & 0 & 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & \frac{3}{2} & - \frac{1}{2} & 0 & 1 \\ 1 & 0 & 0 & - 1 & 0 & 1 & 0 \end{matrix}\right) \]\n\nto finally obtain\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 5 \\ 0 & 1 & 0 & 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & \frac{3}{2} & - \frac{1}{2} & 0 & 1 \\ 0 & 0 & 0 & - \frac{3}{2} & - \frac{1}{2} & 1 & - 5 \end{matrix}\right) \]\n\nand this is a simplex tableau. The variables are \( {x}_{2},{x}_{4},{x}_{5},{x}_{1},{x}_{3}, z \) .
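The basic feasible solution and the value \( {z}^{0} = -5 \) can be checked directly; a small NumPy sketch with the variables in their original order \( {x}_{1},\cdots ,{x}_{5} \):

```python
import numpy as np

# Constraint matrix after adding the slack variables x3, x4, x5
# (x4 is subtracted because the second constraint was >=).
A = np.array([[1., 2., 1., 0., 0.],
              [1., 2., 0., -1., 0.],
              [2., 1., 0., 0., 1.]])
b = np.array([10., 2., 6.])

x = np.array([0., 5., 0., 8., 1.])    # the basic feasible solution found above

assert np.allclose(A @ x, b)          # satisfies the equality constraints
assert np.all(x >= 0)                 # and is feasible
assert x[0] - x[1] == -5.0            # z = x1 - x2 agrees with z0 = -5
```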
Example 6.2.3 Consider \( z = {x}_{1} - {x}_{2} \) subject to the constraints, \( {x}_{1} + 2{x}_{2} \leq {10},{x}_{1} + 2{x}_{2} \geq 2 \) , and \( 2{x}_{1} + {x}_{2} \leq 6,{x}_{i} \geq 0 \) . Find a simplex tableau.
Adding in slack variables, an augmented matrix which is descriptive of the constraints\n\nis\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 0 & 0 & {10} \\ 1 & 2 & 0 & - 1 & 0 & 2 \\ 2 & 1 & 0 & 0 & 1 & 6 \end{matrix}\right) \]\n\nThe obvious solution is not feasible because of that -1 in the fourth column. When you let \( {x}_{1},{x}_{2} = 0 \), you end up having \( {x}_{4} = - 2 \) which is negative. Consider the second column and select the 2 in the second row as a pivot to zero out that which is above and below it.\n\n\[ \left( \begin{matrix} 0 & 0 & 1 & 1 & 0 & 8 \\ \frac{1}{2} & 1 & 0 & - \frac{1}{2} & 0 & 1 \\ \frac{3}{2} & 0 & 0 & \frac{1}{2} & 1 & 5 \end{matrix}\right) \]\n\nThis one is good. When you let \( {x}_{1} = {x}_{4} = 0 \), you find that \( {x}_{2} = 1,{x}_{3} = 8,{x}_{5} = 5 \) . The obvious solution is now feasible. You can now assemble the simplex tableau. The first step is to include a column and row for \( z \) . This yields\n\n\[ \left( \begin{matrix} 0 & 0 & 1 & 1 & 0 & 0 & 8 \\ \frac{1}{2} & 1 & 0 & - \frac{1}{2} & 0 & 0 & 1 \\ \frac{3}{2} & 0 & 0 & \frac{1}{2} & 1 & 0 & 5 \\ - 1 & 1 & 0 & 0 & 0 & 1 & 0 \end{matrix}\right) \]\n\nNow you need to get zeros in the right places so the simple columns will be preserved as simple columns in this larger matrix. This means you need to zero out the 1 in the second column on the bottom. Adding \( - 1 \) times the second row to the bottom row, a simplex tableau is\n\n\[ \left( \begin{matrix} 0 & 0 & 1 & 1 & 0 & 0 & 8 \\ \frac{1}{2} & 1 & 0 & - \frac{1}{2} & 0 & 0 & 1 \\ \frac{3}{2} & 0 & 0 & \frac{1}{2} & 1 & 0 & 5 \\ - \frac{3}{2} & 0 & 0 & \frac{1}{2} & 0 & 1 & - 1 \end{matrix}\right) . \]\n\nNote it is not the same one obtained earlier. There is no reason a simplex tableau should be unique. In fact, it follows from the above general description that you have one for each basic feasible point of the region determined by the constraints.
Maximize \( z = {x}_{1} - {x}_{2} \) subject to the constraints,\n\n\[ \n{x}_{1} + 2{x}_{2} \leq {10},{x}_{1} + 2{x}_{2} \geq 2 \n\]\n\nand \( 2{x}_{1} + {x}_{2} \leq 6,{x}_{i} \geq 0 \) .
Recall this is the same as maximizing \( z = {x}_{1} - {x}_{2} \) subject to\n\n\[ \n\left( \begin{matrix} 1 & 2 & 1 & 0 & 0 \\ 1 & 2 & 0 & - 1 & 0 \\ 2 & 1 & 0 & 0 & 1 \end{matrix}\right) \left( \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \\ {x}_{5} \end{array}\right) = \left( \begin{array}{l} {10} \\ 2 \\ 6 \end{array}\right) ,\mathbf{x} \geq \mathbf{0}, \n\]\n\nthe variables, \( {x}_{3},{x}_{4},{x}_{5} \) being slack variables. Recall the simplex tableau was\n\n\[ \n\left( \begin{matrix} 1 & 0 & 0 & \frac{1}{2} & \frac{1}{2} & 0 & 5 \\ 0 & 1 & 0 & 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & \frac{3}{2} & - \frac{1}{2} & 0 & 1 \\ 0 & 0 & 0 & - \frac{3}{2} & - \frac{1}{2} & 1 & - 5 \end{matrix}\right) \n\]\n\nwith the variables ordered as \( {x}_{2},{x}_{4},{x}_{5},{x}_{1},{x}_{3} \) and so \( {\mathbf{x}}_{B} = \left( {{x}_{2},{x}_{4},{x}_{5}}\right) \) and\n\n\[ \n{\mathbf{x}}_{F} = \left( {{x}_{1},{x}_{3}}\right) . \n\]\n\nApply the simplex algorithm to the fourth column because \( - \frac{3}{2} < 0 \) and this is the most negative entry in the bottom row. The pivot is \( 3/2 \) because \( 1/\left( {3/2}\right) = 2/3 < 5/\left( {1/2}\right) \) . Dividing this row by \( 3/2 \) and then using this to zero out the other elements in that column, the new simplex tableau is\n\n\[ \n\left( \begin{matrix} 1 & 0 & - \frac{1}{3} & 0 & \frac{2}{3} & 0 & \frac{14}{3} \\ 0 & 1 & 0 & 0 & 1 & 0 & 8 \\ 0 & 0 & \frac{2}{3} & 1 & - \frac{1}{3} & 0 & \frac{2}{3} \\ 0 & 0 & 1 & 0 & - 1 & 1 & - 4 \end{matrix}\right) \n\]\n\nNow there is still a negative number in the bottom left row. Therefore, the process should be continued. This time the pivot is the \( 2/3 \) in the top of the column. 
Dividing the top row\n\nby \( 2/3 \) and then using this to zero out the entries below it,\n\n\[ \n\left( \begin{matrix} \frac{3}{2} & 0 & - \frac{1}{2} & 0 & 1 & 0 & 7 \\ - \frac{3}{2} & 1 & \frac{1}{2} & 0 & 0 & 0 & 1 \\ \frac{1}{2} & 0 & \frac{1}{2} & 1 & 0 & 0 & 3 \\ \frac{3}{2} & 0 & \frac{1}{2} & 0 & 0 & 1 & 3 \end{matrix}\right) \n\]\n\nNow all the numbers on the bottom left row are nonnegative so the process stops. Now recall the variables and columns were ordered as \( {x}_{2},{x}_{4},{x}_{5},{x}_{1},{x}_{3} \) . The solution in terms of \( {x}_{1} \) and \( {x}_{2} \) is \( {x}_{2} = 0 \) and \( {x}_{1} = 3 \) and \( z = 3 \) . Note that in the above, I did not worry about permuting the columns to keep those which go with the basic variables on the left.
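The conclusion \( z = 3 \) can be certified independently of the tableau arithmetic using weak duality: a feasible point of the dual problem whose objective value equals the primal value proves both are optimal. In the NumPy sketch below, the dual vector \( \mathbf{y} \) is a certificate read off by hand, not something computed in the text:

```python
import numpy as np

c = np.array([1., -1.])                       # maximize c . x
A_ub = np.array([[1., 2.],
                 [-1., -2.],                  # x1 + 2x2 >= 2, rewritten as <=
                 [2., 1.]])
b_ub = np.array([10., -2., 6.])

x = np.array([3., 0.])                        # claimed maximizer
y = np.array([0., 0., 0.5])                   # hand-picked dual certificate

assert np.all(A_ub @ x <= b_ub + 1e-12) and np.all(x >= 0)   # primal feasible
assert np.all(A_ub.T @ y >= c - 1e-12) and np.all(y >= 0)    # dual feasible
assert np.isclose(c @ x, b_ub @ y)            # equal objectives, so z = 3 is optimal
```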
How many pounds of each feed per pig should the pig farmer use in order to minimize his cost?
His problem is to minimize \( C \equiv 2{x}_{1} + 3{x}_{2} + 2{x}_{3} + 3{x}_{4} \) subject to the constraints\n\n\[ \n{x}_{1} + 2{x}_{2} + {x}_{3} + 3{x}_{4} \geq 5 \]\n\n\[ \n5{x}_{1} + 3{x}_{2} + 2{x}_{3} + {x}_{4} \geq 8 \]\n\n\[ \n{x}_{1} + 2{x}_{2} + 2{x}_{3} + {x}_{4} \geq 6 \]\n\n\[ \n2{x}_{1} + {x}_{2} + {x}_{3} + {x}_{4} \geq 7 \]\n\n\[ \n{x}_{1} + {x}_{2} + {x}_{3} + {x}_{4} \geq 4 \]\n\nwhere each \( {x}_{i} \geq 0 \) . Add in the slack variables,\n\n\[ \n{x}_{1} + 2{x}_{2} + {x}_{3} + 3{x}_{4} - {x}_{5} = 5 \]\n\n\[ \n5{x}_{1} + 3{x}_{2} + 2{x}_{3} + {x}_{4} - {x}_{6} = 8 \]\n\n\[ \n{x}_{1} + 2{x}_{2} + 2{x}_{3} + {x}_{4} - {x}_{7} = 6 \]\n\n\[ \n2{x}_{1} + {x}_{2} + {x}_{3} + {x}_{4} - {x}_{8} = 7 \]\n\n\[ \n{x}_{1} + {x}_{2} + {x}_{3} + {x}_{4} - {x}_{9} = 4 \]\nThe augmented matrix for this system is\n\n\[ \n\left( \begin{matrix} 1 & 2 & 1 & 3 & - 1 & 0 & 0 & 0 & 0 & 5 \\ 5 & 3 & 2 & 1 & 0 & - 1 & 0 & 0 & 0 & 8 \\ 1 & 2 & 2 & 1 & 0 & 0 & - 1 & 0 & 0 & 6 \\ 2 & 1 & 1 & 1 & 0 & 0 & 0 & - 1 & 0 & 7 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & - 1 & 4 \end{matrix}\right) \]\n\nHow in the world can you find a basic feasible solution? Remember the simplex algorithm is designed to keep the entries in the right column nonnegative so you use this algorithm a few times till the obvious solution is a basic feasible solution.\n\nConsider the first column. The pivot is the 5. 
Using the row operations described in the algorithm, you get\n\n\[ \n\left( \begin{matrix} 0 & \frac{7}{5} & \frac{3}{5} & \frac{14}{5} & - 1 & \frac{1}{5} & 0 & 0 & 0 & \frac{17}{5} \\ 1 & \frac{3}{5} & \frac{2}{5} & \frac{1}{5} & 0 & - \frac{1}{5} & 0 & 0 & 0 & \frac{8}{5} \\ 0 & \frac{7}{5} & \frac{8}{5} & \frac{4}{5} & 0 & \frac{1}{5} & - 1 & 0 & 0 & \frac{22}{5} \\ 0 & - \frac{1}{5} & \frac{1}{5} & \frac{3}{5} & 0 & \frac{2}{5} & 0 & - 1 & 0 & \frac{19}{5} \\ 0 & \frac{2}{5} & \frac{3}{5} & \frac{4}{5} & 0 & \frac{1}{5} & 0 & 0 & - 1 & \frac{12}{5} \end{matrix}\right) \]\n\nNow go to the second column. The pivot in this column is the \( 7/5 \) . This is in a different row than the pivot in the first column so I will use it to zero out everything below it. This will get rid of the zeros in the fifth column and introduce zeros in the second. This yields\n\n\[ \n\left( \begin{matrix} 0 & 1 & \frac{3}{7} & 2 & - \frac{5}{7} & \frac{1}{7} & 0 & 0 & 0 & \frac{17}{7} \\ 1 & 0 & \frac{1}{7} & - 1 & \frac{3}{7} & - \frac{2}{7} & 0 & 0 & 0 & \frac{1}{7} \\ 0 & 0 & 1 & - 2 & 1 & 0 & - 1 & 0 & 0 & 1 \\ 0 & 0 & \frac{2}{7} & 1 & - \frac{1}{7} & \frac{3}{7} & 0 & - 1 & 0 & \frac{30}{7} \\ 0 & 0 & \frac{3}{7} & 0 & \frac{2}{7} & \frac{1}{7} & 0 & 0 & - 1 & \frac{10}{7} \end{matrix}\right) \]\n\nNow consider another column, this time the fourth. I will pick this one because it has some negative numbers in it.
Maximize \( z = {x}_{1} - 3{x}_{2} + {x}_{3} \) subject to the constraints \( {x}_{1} + {x}_{2} + {x}_{3} \leq \) \( {10},{x}_{1} + {x}_{2} + {x}_{3} \geq 2,{x}_{1} + {x}_{2} + 3{x}_{3} \leq 8 \) and \( {x}_{1} + 2{x}_{2} + {x}_{3} \leq 7 \) with all variables nonnegative.
The first part of it is the same. You wind up with the same simplex tableau,\n\n\[ \left( \begin{matrix} 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 8 \\ 1 & 1 & 1 & 0 & - 1 & 0 & 0 & 0 & 2 \\ - 2 & - 2 & 0 & 0 & 3 & 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 5 \\ 0 & 4 & 0 & 0 & - 1 & 0 & 0 & 1 & 2 \end{matrix}\right) \]\n\nbut this time, you apply the algorithm to get rid of the negative entries in the left bottom row. There is a -1 . Use this column. The pivot is the 3 . The next tableau is\n\n\[ \left( \begin{matrix} \frac{2}{3} & \frac{2}{3} & 0 & 1 & 0 & - \frac{1}{3} & 0 & 0 & \frac{22}{3} \\ \frac{1}{3} & \frac{1}{3} & 1 & 0 & 0 & \frac{1}{3} & 0 & 0 & \frac{8}{3} \\ - \frac{2}{3} & - \frac{2}{3} & 0 & 0 & 1 & \frac{1}{3} & 0 & 0 & \frac{2}{3} \\ \frac{2}{3} & \frac{5}{3} & 0 & 0 & 0 & - \frac{1}{3} & 1 & 0 & \frac{13}{3} \\ - \frac{2}{3} & \frac{10}{3} & 0 & 0 & 0 & \frac{1}{3} & 0 & 1 & \frac{8}{3} \end{matrix}\right) \]\n\nThere is still a negative entry, the \( - 2/3 \) . This will be the new pivot column. The pivot is the \( 2/3 \) on the fourth row. This yields\n\n\[ \left( \begin{matrix} 0 & - 1 & 0 & 1 & 0 & 0 & - 1 & 0 & 3 \\ 0 & - \frac{1}{2} & 1 & 0 & 0 & \frac{1}{2} & - \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 5 \\ 1 & \frac{5}{2} & 0 & 0 & 0 & - \frac{1}{2} & \frac{3}{2} & 0 & \frac{13}{2} \\ 0 & 5 & 0 & 0 & 0 & 0 & 1 & 1 & 7 \end{matrix}\right) \]\n\nand the process stops. The maximum for \( z \) is 7 and it occurs when \( {x}_{1} = {13}/2,{x}_{2} = 0,{x}_{3} = \) \( 1/2 \) .
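Weak duality again gives an independent certificate that \( z = 7 \) is optimal; the dual vector below is my own choice, not from the text:

```python
import numpy as np

c = np.array([1., -3., 1.])                   # maximize c . x
A_ub = np.array([[1., 1., 1.],
                 [-1., -1., -1.],             # x1 + x2 + x3 >= 2, rewritten as <=
                 [1., 1., 3.],
                 [1., 2., 1.]])
b_ub = np.array([10., -2., 8., 7.])

x = np.array([6.5, 0., 0.5])                  # claimed maximizer
y = np.array([0., 0., 0., 1.])                # hand-picked dual certificate

assert np.all(A_ub @ x <= b_ub + 1e-12) and np.all(x >= 0)   # primal feasible
assert np.all(A_ub.T @ y >= c - 1e-12) and np.all(y >= 0)    # dual feasible
assert np.isclose(c @ x, b_ub @ y)            # equal objectives, so z = 7 is optimal
```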
Yes
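The stopping point can be sanity-checked directly against the original constraints: the claimed maximizer must be feasible and give \( z = 7 \). A quick verification in exact arithmetic:

```python
from fractions import Fraction as F

# claimed maximizer read off the final tableau
x1, x2, x3 = F(13, 2), F(0), F(1, 2)

# the original constraints of the problem
assert x1 + x2 + x3 <= 10
assert x1 + x2 + x3 >= 2
assert x1 + x2 + 3 * x3 <= 8
assert x1 + 2 * x2 + x3 <= 7

z = x1 - 3 * x2 + x3  # objective value at the claimed maximizer
```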
Maximize \( {x}_{1} - {x}_{2} + 2{x}_{3} \) subject to the constraints, \( 2{x}_{1} + {x}_{2} - {x}_{3} \geq 3,{x}_{1} + {x}_{2} + {x}_{3} \geq 2,{x}_{1} + {x}_{2} + {x}_{3} \leq 7 \) and \( \mathbf{x} \geq \mathbf{0} \) .
From 6.20 you can immediately assemble an initial simplex tableau. You begin with the first 6 columns and top 3 rows in 6.20 . Then add in the column and row for \( z \) . This yields\n\n\[ \left( \begin{matrix} 1 & \frac{2}{3} & 0 & - \frac{1}{3} & - \frac{1}{3} & 0 & 0 & \frac{5}{3} \\ 0 & \frac{1}{3} & 1 & \frac{1}{3} & - \frac{2}{3} & 0 & 0 & \frac{1}{3} \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 5 \\ - 1 & 1 & - 2 & 0 & 0 & 0 & 1 & 0 \end{matrix}\right) \]\n\nand you first do row operations to make the first and third columns simple columns. Thus the next simplex tableau is\n\n\[ \left( \begin{matrix} 1 & \frac{2}{3} & 0 & - \frac{1}{3} & - \frac{1}{3} & 0 & 0 & \frac{5}{3} \\ 0 & \frac{1}{3} & 1 & \frac{1}{3} & - \frac{2}{3} & 0 & 0 & \frac{1}{3} \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 5 \\ 0 & \frac{7}{3} & 0 & \frac{1}{3} & - \frac{5}{3} & 0 & 1 & \frac{7}{3} \end{matrix}\right) \]\n\nYou are trying to get rid of negative entries in the bottom left row. There is only one, the \( - 5/3 \) . The pivot is the 1 . The next simplex tableau is then\n\n\[ \left( \begin{matrix} 1 & \frac{2}{3} & 0 & - \frac{1}{3} & 0 & \frac{1}{3} & 0 & \frac{10}{3} \\ 0 & \frac{1}{3} & 1 & \frac{1}{3} & 0 & \frac{2}{3} & 0 & \frac{11}{3} \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 5 \\ 0 & \frac{7}{3} & 0 & \frac{1}{3} & 0 & \frac{5}{3} & 1 & \frac{32}{3} \end{matrix}\right) \]\n\nand so the maximum value of \( z \) is \( {32}/3 \) and it occurs when \( {x}_{1} = {10}/3,{x}_{2} = 0 \) and \( {x}_{3} = {11}/3 \) .
Yes
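As in the previous example, the answer can be checked against the original constraints in exact arithmetic:

```python
from fractions import Fraction as F

# claimed maximizer read off the final tableau
x1, x2, x3 = F(10, 3), F(0), F(11, 3)

# the original constraints of the problem
assert 2 * x1 + x2 - x3 >= 3
assert x1 + x2 + x3 >= 2
assert x1 + x2 + x3 <= 7

z = x1 - x2 + 2 * x3  # objective value at the claimed maximizer
```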
Lemma 6.5.1 Let \( \mathbf{x} \) be a solution of the inequalities of \( A \).) and let \( \mathbf{y} \) be a solution of the inequalities of \( B \).). Then\n\n\[ \mathbf{{cx}} \geq \mathbf{{yb}}\text{.} \]\n\nIf equality holds in the above, then \( \mathbf{x} \) is a solution to \( A \).) and \( \mathbf{y} \) is a solution to \( B \).).
Proof: Since \( \mathbf{c} \geq \mathbf{y}A \) and \( \mathbf{x} \geq \mathbf{0} \), it follows \( \mathbf{{cx}} \geq \mathbf{y}A\mathbf{x} \) . Since \( \mathbf{y} \geq \mathbf{0} \) and \( A\mathbf{x} \geq \mathbf{b} \), it follows \( \mathbf{y}A\mathbf{x} \geq \mathbf{{yb}} \) . Hence \( \mathbf{{cx}} \geq \mathbf{{yb}} \) .\n\nNow suppose equality holds. If \( {\mathbf{x}}^{\prime } \) is any solution of the inequalities of \( A \).), the first part of the lemma gives \( \mathbf{c}{\mathbf{x}}^{\prime } \geq \mathbf{{yb}} = \mathbf{{cx}} \), so \( \mathbf{x} \) minimizes \( \mathbf{{cx}} \) and is therefore a solution of \( A \).). Similarly, for any solution \( {\mathbf{y}}^{\prime } \) of the inequalities of \( B \).), \( {\mathbf{y}}^{\prime }\mathbf{b} \leq \mathbf{{cx}} = \mathbf{{yb}} \), so \( \mathbf{y} \) is a solution of \( B \).). \( \blacksquare \)
Yes
Theorem 6.5.2 Suppose there exists a solution \( \mathbf{x} \) to A.) where \( \mathbf{x} \) is a basic feasible solution of the inequalities of \( A \) .). Then there exists a solution \( \mathbf{y} \) to \( B \) .) and \( \mathbf{{cx}} = \mathbf{{by}} \) . It is also possible to find \( \mathbf{y} \) from \( \mathbf{x} \) using a simple formula.
Proof: Since the solution to \( A \) .) is basic and feasible, there exists a simplex tableau like 6.23 such that \( {\mathbf{x}}^{\prime } \) can be split into \( {\mathbf{x}}_{B} \) and \( {\mathbf{x}}_{F} \) such that \( {\mathbf{x}}_{F} = 0 \) and \( {\mathbf{x}}_{B} = {B}^{-1}\mathbf{b} \) . Now since it is a minimizer, it follows \( {\mathbf{c}}_{B}{B}^{-1}F - {\mathbf{c}}_{F} \leq \mathbf{0} \) and the minimum value for \( \mathbf{{cx}} \) is \( {\mathbf{c}}_{B}{B}^{-1}\mathbf{b} \) . Stating this again, \( \mathbf{{cx}} = {\mathbf{c}}_{B}{B}^{-1}\mathbf{b} \) . Is it possible you can take \( \mathbf{y} = {\mathbf{c}}_{B}{B}^{-1} \) ? From Lemma 6.5.1 this will be so if \( {\mathbf{c}}_{B}{B}^{-1} \) solves the constraints of problem \( B \) .). Is \( {\mathbf{c}}_{B}{B}^{-1} \geq 0 \) ? Is \( {\mathbf{c}}_{B}{B}^{-1}A \leq \mathbf{c} \) ? These two conditions are satisfied if and only if \( {\mathbf{c}}_{B}{B}^{-1}\left( \begin{array}{ll} A & - I \end{array}\right) \leq \) \( \left( \begin{array}{ll} \mathbf{c} & \mathbf{0} \end{array}\right) \) . Referring to the process of permuting the columns of the first augmented matrix of (6.21) to get (6.22) and doing the same permutations on the columns of \( \left( \begin{array}{ll} A & - I \end{array}\right) \) and \( \left( \begin{array}{ll} \mathbf{c} & \mathbf{0} \end{array}\right) \), the desired inequality holds if and only if \( {\mathbf{c}}_{B}{B}^{-1}\left( \begin{array}{ll} B & F \end{array}\right) \leq \left( \begin{array}{ll} {\mathbf{c}}_{B} & {\mathbf{c}}_{F} \end{array}\right) \) which is equivalent to saying \( \left( \begin{array}{ll} {\mathbf{c}}_{B} & {\mathbf{c}}_{B}{B}^{-1}F \end{array}\right) \leq \left( \begin{array}{ll} {\mathbf{c}}_{B} & {\mathbf{c}}_{F} \end{array}\right) \) and this is true because \( {\mathbf{c}}_{B}{B}^{-1}F - {\mathbf{c}}_{F} \leq \mathbf{0} \) due to the assumption that \( \mathbf{x} \) is a minimizer. 
The simple formula is just \( \mathbf{y} = {\mathbf{c}}_{B}{B}^{-1} \) . \( \blacksquare \)
Yes
Corollary 6.5.3 Suppose there exists a solution, \( \mathbf{y} \) to \( B \) .) where \( \mathbf{y} \) is a basic feasible solution of the inequalities of \( B \) .). Then there exists a solution, \( \mathbf{x} \) to \( A \) .) and \( \mathbf{{cx}} = \mathbf{{by}} \) . It is also possible to find \( \mathbf{x} \) from \( \mathbf{y} \) using a simple formula.
In this case, and referring to (6.23), the simple formula is \( \mathbf{x} = {B}_{1}^{-T}{\mathbf{b}}_{{B}_{1}} \) .
Yes
Example 7.1.4 Let\n\n\[ A = \left( \begin{matrix} 2 & 2 & - 2 \\ 1 & 3 & - 1 \\ - 1 & 1 & 1 \end{matrix}\right) \]\n\nFirst find the eigenvalues.
\[ \det \left( {\lambda \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) - \left( \begin{matrix} 2 & 2 & - 2 \\ 1 & 3 & - 1 \\ - 1 & 1 & 1 \end{matrix}\right) }\right) = 0 \]\n\nThis is \( {\lambda }^{3} - 6{\lambda }^{2} + {8\lambda } = 0 \) and the solutions are 0,2, and 4 .\n\n0 Can be an Eigenvalue!\n\nNow find the eigenvectors. For \( \lambda = 0 \) the augmented matrix for finding the solutions is\n\n\[ \left( \begin{matrix} 2 & 2 & - 2 & 0 \\ 1 & 3 & - 1 & 0 \\ - 1 & 1 & 1 & 0 \end{matrix}\right) \]\n\nand the row reduced echelon form is\n\n\[ \left( \begin{matrix} 1 & 0 & - 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\nTherefore, the eigenvectors are of the form\n\n\[ z\left( \begin{array}{l} 1 \\ 0 \\ 1 \end{array}\right) \]\n\nwhere \( z \neq 0 \) .\n\nNext find the eigenvectors for \( \lambda = 2 \) . The augmented matrix for the system of equations needed to find these eigenvectors is\n\n\[ \left( \begin{matrix} 0 & - 2 & 2 & 0 \\ - 1 & - 1 & 1 & 0 \\ 1 & - 1 & 1 & 0 \end{matrix}\right) \]\n\nand the row reduced echelon form is\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & - 1 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\nand so the eigenvectors are of the form\n\n\[ z\left( \begin{array}{l} 0 \\ 1 \\ 1 \end{array}\right) \]\n\nwhere \( z \neq 0 \) .\n\nFinally find the eigenvectors for \( \lambda = 4 \) . The augmented matrix for the system of equations needed to find these eigenvectors is\n\n\[ \left( \begin{matrix} 2 & - 2 & 2 & 0 \\ - 1 & 1 & 1 & 0 \\ 1 & - 1 & 3 & 0 \end{matrix}\right) \]\n\nand the row reduced echelon form is\n\n\[ \left( \begin{matrix} 1 & - 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\nTherefore, the eigenvectors are of the form\n\n\[ y\left( \begin{array}{l} 1 \\ 1 \\ 0 \end{array}\right) \]\n\nwhere \( y \neq 0 \) .
Yes
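Each eigenpair found above can be verified by checking \( A\mathbf{v} = \lambda \mathbf{v} \) directly; a short sanity check in plain Python:

```python
def mat_vec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[ 2, 2, -2],
     [ 1, 3, -1],
     [-1, 1,  1]]

# (eigenvalue, eigenvector) pairs from the example
pairs = [(0, [1, 0, 1]),
         (2, [0, 1, 1]),
         (4, [1, 1, 0])]

checks = [mat_vec(A, v) == [lam * x for x in v] for lam, v in pairs]
```

Note the first check confirms that \( A\mathbf{v} = \mathbf{0} \) for the eigenvector belonging to the eigenvalue 0.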
Example 7.1.5 Let\n\n\[ A = \left( \begin{matrix} 2 & - 2 & - 1 \\ - 2 & - 1 & - 2 \\ {14} & {25} & {14} \end{matrix}\right) \]\n\nFind the eigenvectors and eigenvalues.
In this case the eigenvalues are \( 3,6,6 \) where I have listed 6 twice because it is a zero of algebraic multiplicity two, the characteristic equation being\n\n\[ \left( {\lambda - 3}\right) {\left( \lambda - 6\right) }^{2} = 0.\]\n\nIt remains to find the eigenvectors for these eigenvalues. First consider the eigenvectors for \( \lambda = 3 \) . You must solve\n\n\[ \left( {3\left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) - \left( \begin{matrix} 2 & - 2 & - 1 \\ - 2 & - 1 & - 2 \\ {14} & {25} & {14} \end{matrix}\right) }\right) \left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right) . \]\n\nUsing routine row operations, the eigenvectors are nonzero vectors of the form\n\n\[ \left( \begin{matrix} z \\ - z \\ z \end{matrix}\right) = z\left( \begin{matrix} 1 \\ - 1 \\ 1 \end{matrix}\right) \]\n\nNext consider the eigenvectors for \( \lambda = 6 \) . This requires you to solve\n\n\[ \left( {6\left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) - \left( \begin{matrix} 2 & - 2 & - 1 \\ - 2 & - 1 & - 2 \\ {14} & {25} & {14} \end{matrix}\right) }\right) \left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right) \]\n\nand using the usual procedures yields the eigenvectors for \( \lambda = 6 \) are of the form\n\n\[ z\left( \begin{matrix} - \frac{1}{8} \\ - \frac{1}{4} \\ 1 \end{matrix}\right) \]\n\nor written more simply,\n\n\[ z\left( \begin{matrix} - 1 \\ - 2 \\ 8 \end{matrix}\right) \]\n\nwhere \( z \in \mathbb{F} \) .
Yes
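Again the eigenpairs can be checked by multiplying them out:

```python
def mat_vec(A, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[ 2, -2, -1],
     [-2, -1, -2],
     [14, 25, 14]]

# eigenpairs found above: 3 with (1, -1, 1), and 6 with (-1, -2, 8)
ok3 = mat_vec(A, [1, -1, 1]) == [3, -3, 3]
ok6 = mat_vec(A, [-1, -2, 8]) == [-6, -12, 48]
```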
Theorem 7.1.7 Suppose \( M{\mathbf{v}}_{i} = {\lambda }_{i}{\mathbf{v}}_{i}, i = 1,\cdots, r,{\mathbf{v}}_{i} \neq 0 \), and that if \( i \neq j \), then \( {\lambda }_{i} \neq {\lambda }_{j} \) . Then the set of eigenvectors, \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) is linearly independent.
Proof: Suppose the claim of the theorem is not true. Then there exists a subset of this set of vectors\n\n\[ \left\{ {{\mathbf{w}}_{1},\cdots ,{\mathbf{w}}_{s}}\right\} \subseteq \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \]\n\nsuch that\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{s}{c}_{j}{\mathbf{w}}_{j} = \mathbf{0} \]\n\n(7.5)\n\nwhere each \( {c}_{j} \neq 0 \) . Say \( M{\mathbf{w}}_{j} = {\mu }_{j}{\mathbf{w}}_{j} \) where\n\n\[ \left\{ {{\mu }_{1},\cdots ,{\mu }_{s}}\right\} \subseteq \left\{ {{\lambda }_{1},\cdots ,{\lambda }_{r}}\right\} \]\n\nthe \( {\mu }_{j} \) being distinct eigenvalues of \( M \) . Out of all such subsets, let this one be such that \( s \) is as small as possible. Then necessarily, \( s > 1 \) because otherwise, \( {c}_{1}{\mathbf{w}}_{1} = \mathbf{0} \) which would imply \( {\mathbf{w}}_{1} = \mathbf{0} \), which is not allowed for eigenvectors.\n\nNow apply \( M \) to both sides of (7.5).\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{s}{c}_{j}{\mu }_{j}{\mathbf{w}}_{j} = \mathbf{0} \]\n\n(7.6)\n\nNext pick one of the eigenvalues, say \( {\mu }_{k} \), and multiply both sides of (7.5) by \( {\mu }_{k} \) . Thus\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{s}{c}_{j}{\mu }_{k}{\mathbf{w}}_{j} = \mathbf{0} \]\n\n(7.7)\n\nSubtract the sum in (7.6) from the sum in (7.7) to obtain\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{s}{c}_{j}\left( {{\mu }_{k} - {\mu }_{j}}\right) {\mathbf{w}}_{j} = \mathbf{0} \]\n\nThe constant \( {c}_{j}\left( {{\mu }_{k} - {\mu }_{j}}\right) \) equals 0 when \( j = k \), while for \( j \neq k \) it is nonzero because the \( {\mu }_{j} \) are distinct and each \( {c}_{j} \neq 0 \) . Since \( s > 1 \), this exhibits a linear combination of fewer than \( s \) of the \( {\mathbf{w}}_{j} \), with all coefficients nonzero, which equals \( \mathbf{0} \) . Therefore, \( s \) was not as small as possible after all. \( \blacksquare \)
Yes
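The theorem can be illustrated with the three eigenvectors of Example 7.1.4, which belong to the distinct eigenvalues 0, 2, and 4: stacking them as the rows of a matrix gives a nonzero determinant, so they are linearly independent.

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# rows: the eigenvectors for the distinct eigenvalues 0, 2, 4 of Example 7.1.4
S = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0]]
d = det3(S)
```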
Find the eigenvalues and eigenvectors of the matrix\n\n\[ A = \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 2 & - 1 \\ 0 & 1 & 2 \end{matrix}\right) \]
You need to find the eigenvalues. Solve\n\n\[ \det \left( {\lambda \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) - \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 2 & - 1 \\ 0 & 1 & 2 \end{matrix}\right) }\right) = 0. \]\n\nThis reduces to \( \left( {\lambda - 1}\right) \left( {{\lambda }^{2} - {4\lambda } + 5}\right) = 0 \) . The solutions are \( \lambda = 1,\lambda = 2 + i,\lambda = 2 - i \) .
No
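The eigenvalues found, including the complex pair, can be verified by substituting them into the characteristic polynomial; Python's complex arithmetic handles the pair exactly here:

```python
def p(l):
    """Characteristic polynomial (lambda - 1)(lambda^2 - 4 lambda + 5)."""
    return (l - 1) * (l * l - 4 * l + 5)

roots = [1, 2 + 1j, 2 - 1j]
values = [p(l) for l in roots]
```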
Find the principal directions determined by the matrix\n\n\\[ \n\\left( \\begin{array}{lll} \\frac{29}{11} & \\frac{6}{11} & \\frac{6}{11} \\\\ \\frac{6}{11} & \\frac{41}{44} & \\frac{19}{44} \\\\ \\frac{6}{11} & \\frac{19}{44} & \\frac{41}{44} \\end{array}\\right) \n\\]
The eigenvalues are \\( 3,1 \\), and \\( \\frac{1}{2} \\) .\n\nIt is nice to be given the eigenvalues. The largest eigenvalue is 3 which means that in the direction determined by the eigenvector associated with 3 the stretch is three times as large. The smallest eigenvalue is \\( 1/2 \\) and so in the direction determined by the eigenvector for \\( 1/2 \\) the material is compressed, becoming locally half as long. It remains to find these directions. First consider the eigenvector for 3 . It is necessary to solve\n\n\\[ \n\\left( {3\\left( \\begin{array}{lll} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1 \\end{array}\\right) - \\left( \\begin{array}{lll} \\frac{29}{11} & \\frac{6}{11} & \\frac{6}{11} \\\\ \\frac{6}{11} & \\frac{41}{44} & \\frac{19}{44} \\\\ \\frac{6}{11} & \\frac{19}{44} & \\frac{41}{44} \\end{array}\\right) }\\right) \\left( \\begin{array}{l} x \\\\ y \\\\ z \\end{array}\\right) = \\left( \\begin{array}{l} 0 \\\\ 0 \\\\ 0 \\end{array}\\right) \n\\]\n\nThus the augmented matrix for this system of equations is\n\n\\[ \n\\left( \\begin{matrix} \\frac{4}{11} & - \\frac{6}{11} & - \\frac{6}{11} & 0 \\\\ - \\frac{6}{11} & \\frac{91}{44} & - \\frac{19}{44} & 0 \\\\ - \\frac{6}{11} & - \\frac{19}{44} & \\frac{91}{44} & 0 \\end{matrix}\\right) \n\\]\n\nThe row reduced echelon form is\n\n\\[ \n\\left( \\begin{matrix} 1 & 0 & - 3 & 0 \\\\ 0 & 1 & - 1 & 0 \\\\ 0 & 0 & 0 & 0 \\end{matrix}\\right) \n\\]\n\nand so the principal direction for the eigenvalue 3 in which the material is stretched to the maximum extent is\n\n\\[ \n\\left( \\begin{array}{l} 3 \\\\ 1 \\\\ 1 \\end{array}\\right) \n\\]\n\nA direction vector in this direction is\n\n\\[ \n\\left( \\begin{array}{l} 3/\\sqrt{11} \\\\ 1/\\sqrt{11} \\\\ 1/\\sqrt{11} \\end{array}\\right) \n\\]
No
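The claimed direction can be confirmed by checking \( A\mathbf{v} = 3\mathbf{v} \) in exact rational arithmetic:

```python
from fractions import Fraction as F

A = [[F(29, 11), F(6, 11),  F(6, 11)],
     [F(6, 11),  F(41, 44), F(19, 44)],
     [F(6, 11),  F(19, 44), F(41, 44)]]

v = [3, 1, 1]  # claimed principal direction for the eigenvalue 3
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
```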
Example 7.2.2 Find oscillatory solutions to the system of differential equations, \( {\mathbf{x}}^{\prime \prime } = A\mathbf{x} \)\n\nwhere\n\[ A = \left( \begin{matrix} - \frac{5}{3} & - \frac{1}{3} & - \frac{1}{3} \\ - \frac{1}{3} & - \frac{13}{6} & \frac{5}{6} \\ - \frac{1}{3} & \frac{5}{6} & - \frac{13}{6} \end{matrix}\right) \]
The eigenvalues are \( - 1, - 2 \), and -3 .\n\nAccording to the above, you can find solutions by looking for the eigenvectors. Consider the eigenvectors for -3 . The augmented matrix for finding the eigenvectors is\n\n\[ \left( \begin{matrix} - \frac{4}{3} & \frac{1}{3} & \frac{1}{3} & 0 \\ \frac{1}{3} & - \frac{5}{6} & - \frac{5}{6} & 0 \\ \frac{1}{3} & - \frac{5}{6} & - \frac{5}{6} & 0 \end{matrix}\right) \]\n\nand its row echelon form is\n\[ \left( \begin{array}{llll} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) \]\n\nTherefore, the eigenvectors are of the form\n\n\[ \mathbf{v} = z\left( \begin{matrix} 0 \\ - 1 \\ 1 \end{matrix}\right) \]\n\nIt follows\n\[ \left( \begin{matrix} 0 \\ - 1 \\ 1 \end{matrix}\right) \cos \left( {\sqrt{3}t}\right) ,\left( \begin{matrix} 0 \\ - 1 \\ 1 \end{matrix}\right) \sin \left( {\sqrt{3}t}\right) \]\n\nare both solutions to the system of differential equations. You can find other oscillatory solutions in the same way by considering the other eigenvalues. You might try checking these answers to verify they work.
Yes
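Since \( \frac{{d}^{2}}{d{t}^{2}}\cos \left( {\sqrt{3}t}\right) = - 3\cos \left( {\sqrt{3}t}\right) \), the displayed functions solve \( {\mathbf{x}}^{\prime \prime } = A\mathbf{x} \) exactly when \( A\mathbf{v} = - 3\mathbf{v} \) . Checking this for the eigenvector found, in exact arithmetic:

```python
from fractions import Fraction as F

A = [[F(-5, 3), F(-1, 3),  F(-1, 3)],
     [F(-1, 3), F(-13, 6), F(5, 6)],
     [F(-1, 3), F(5, 6),   F(-13, 6)]]

v = [0, -1, 1]  # eigenvector found for the eigenvalue -3
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
```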
Lemma 7.4.1 Let \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{n}}\right\} \) be a basis for \( {\mathbb{F}}^{n} \) . Then there exists an orthonormal basis for \( {\mathbb{F}}^{n},\left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{n}}\right\} \) which has the property that for each \( k \leq n \), span \( \left( {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k}}\right) = \) \( \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right) \) .
Proof: Let \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{n}}\right\} \) be a basis for \( {\mathbb{F}}^{n} \) . Let \( {\mathbf{u}}_{1} \equiv {\mathbf{x}}_{1}/\left| {\mathbf{x}}_{1}\right| \) . Thus for \( k = 1 \) , \( \operatorname{span}\left( {\mathbf{u}}_{1}\right) = \operatorname{span}\left( {\mathbf{x}}_{1}\right) \) and \( \left\{ {\mathbf{u}}_{1}\right\} \) is an orthonormal set. Now suppose for some \( k < n,{\mathbf{u}}_{1},\cdots \) , \( {\mathbf{u}}_{k} \) have been chosen such that \( \left( {{\mathbf{u}}_{j} \cdot {\mathbf{u}}_{l}}\right) = {\delta }_{jl} \) and \( \operatorname{span}\left( {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k}}\right) = \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right) \) .\n\nThen define\n\[{\mathbf{u}}_{k + 1} \equiv \frac{{\mathbf{x}}_{k + 1} - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{j}}\right) {\mathbf{u}}_{j}}{\left| {\mathbf{x}}_{k + 1} - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{j}\right) {\mathbf{u}}_{j}\right| },\]\n\n\( \left( {7.10}\right) \)\n\nwhere the denominator is not equal to zero because the \( {\mathbf{x}}_{j} \) form a basis and so\n\n\[{\mathbf{x}}_{k + 1} \notin \operatorname{span}\left( {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k}}\right) = \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right)\]\n\nThus by induction,\n\n\[{\mathbf{u}}_{k + 1} \in \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k},{\mathbf{x}}_{k + 1}}\right) = \operatorname{span}\left( {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k},{\mathbf{x}}_{k + 1}}\right) .\]\n\nAlso, \( {\mathbf{x}}_{k + 1} \in \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k},{\mathbf{u}}_{k + 1}}\right) \) which is seen easily by solving 7.10 for \( {\mathbf{x}}_{k + 1} \) and it follows\n\n\[\operatorname{span}\left( {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{k},{\mathbf{x}}_{k + 1}}\right) = \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k},{\mathbf{u}}_{k + 1}}\right) .\]\n\nIf \( l \leq k \), then letting \( C \) denote the reciprocal of the denominator in \( \left( {7.10}\right) \) ,\n\n\[ \left( {{\mathbf{u}}_{k + 1} \cdot {\mathbf{u}}_{l}}\right) = C\left( {\left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{l}}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{j}}\right) \left( {{\mathbf{u}}_{j} \cdot {\mathbf{u}}_{l}}\right) }\right) = \]\n\n\[ C\left( {\left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{l}}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{j}}\right) {\delta }_{lj}}\right) = C\left( {\left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{l}}\right) - \left( {{\mathbf{x}}_{k + 1} \cdot {\mathbf{u}}_{l}}\right) }\right) = 0. \]\n\nThe vectors, \( {\left\{ {\mathbf{u}}_{j}\right\} }_{j = 1}^{n} \), generated in this way are therefore an orthonormal basis because each vector has unit length. \( \blacksquare \)
Yes
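The construction in the proof is the Gram–Schmidt process. A minimal floating-point sketch of formula (7.10) follows; the starting basis here is illustrative:

```python
import math

def gram_schmidt(X):
    """Orthonormalize a list of real vectors, following (7.10)."""
    U = []
    for x in X:
        # subtract the projections of x onto the previously built u_j
        w = list(x)
        for u in U:
            c = sum(a * b for a, b in zip(x, u))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(sum(wi * wi for wi in w))  # nonzero if X is independent
        U.append([wi / n for wi in w])
    return U

U = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
```

Each prefix of the output spans the same subspace as the corresponding prefix of the input, which is the extra property the lemma asserts.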
Proposition 7.4.3 An \( n \times n \) matrix is unitary if and only if the columns are an orthonormal set.
Proof: This follows right away from the way we multiply matrices. If \( U \) is an \( n \times n \) complex matrix, then\n\n\[{\left( {U}^{ * }U\right) }_{ij} = {\mathbf{u}}_{i}^{ * }{\mathbf{u}}_{j} = \overline{\left( {\mathbf{u}}_{i},{\mathbf{u}}_{j}\right) }\n\]\n\nand the matrix is unitary if and only if this equals \( {\delta }_{ij} \) if and only if the columns are orthonormal. ∎
Yes
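In the real case \( {U}^{ * } = {U}^{T} \), and the proposition can be checked on a small example. The matrix below is an illustration chosen so the arithmetic is exact: its columns \( \left( {3/5,4/5}\right) \) and \( \left( {-4/5,3/5}\right) \) are orthonormal, and \( {U}^{T}U \) works out to the identity:

```python
from fractions import Fraction as F

U = [[F(3, 5), F(-4, 5)],
     [F(4, 5), F(3, 5)]]

n = len(U)
# (U^T U)_{ij} is the dot product of column i with column j
UtU = [[sum(U[k][i] * U[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]
```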
Theorem 7.4.4 Let \( A \) be an \( n \times n \) matrix. Then there exists a unitary matrix \( U \) such that\n\n\[ {U}^{ * }{AU} = T \]\n\n(7.11)\n\nwhere \( T \) is an upper triangular matrix having the eigenvalues of \( A \) on the main diagonal listed according to multiplicity as roots of the characteristic equation.
Proof: The theorem is clearly true if \( A \) is a \( 1 \times 1 \) matrix. Just let \( U = 1 \) the \( 1 \times 1 \) matrix which has 1 down the main diagonal and zeros elsewhere. Suppose it is true for \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrices and let \( A \) be an \( n \times n \) matrix. Then let \( {\mathbf{v}}_{1} \) be a unit eigenvector for \( A \) . Then there exists \( {\lambda }_{1} \) such that\n\n\[ A{\mathbf{v}}_{1} = {\lambda }_{1}{\mathbf{v}}_{1},\left| {\mathbf{v}}_{1}\right| = 1 \]\n\nExtend \( \left\{ {\mathbf{v}}_{1}\right\} \) to a basis and then use Lemma 7.4.1 to obtain \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{n}}\right\} \), an orthonormal basis in \( {\mathbb{F}}^{n} \) . Let \( {U}_{0} \) be a matrix whose \( {i}^{th} \) column is \( {\mathbf{v}}_{i} \) . Then from the above, it follows \( {U}_{0} \) is unitary. Then \( {U}_{0}^{ * }A{U}_{0} \) is of the form\n\n\[ \left( \begin{array}{llll} {\lambda }_{1} & * & \cdots & * \\ 0 & & & \\ \vdots & & {A}_{1} & \\ 0 & & & \end{array}\right) \]\n\nwhere \( {A}_{1} \) is an \( n - 1 \times n - 1 \) matrix. Now by induction there exists an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) unitary matrix \( {\widetilde{U}}_{1} \) such that\n\n\[ {\widetilde{U}}_{1}^{ * }{A}_{1}{\widetilde{U}}_{1} = {T}_{n - 1} \]\n\nan upper triangular matrix. Consider\n\n\[ {U}_{1} \equiv \left( \begin{matrix} 1 & \mathbf{0} \\ \mathbf{0} & {\widetilde{U}}_{1} \end{matrix}\right) \]\n\nThis is a unitary matrix and\n\n\[ {U}_{1}^{ * }{U}_{0}^{ * }A{U}_{0}{U}_{1} = \left( \begin{matrix} 1 & \mathbf{0} \\ \mathbf{0} & {\widetilde{U}}_{1}^{ * } \end{matrix}\right) \left( \begin{matrix} {\lambda }_{1} & * \\ \mathbf{0} & {A}_{1} \end{matrix}\right) \left( \begin{matrix} 1 & \mathbf{0} \\ \mathbf{0} & {\widetilde{U}}_{1} \end{matrix}\right) = \left( \begin{matrix} {\lambda }_{1} & * \\ \mathbf{0} & {T}_{n - 1} \end{matrix}\right) \equiv T \]\n\nwhere \( T \) is upper triangular. 
Then let \( U = {U}_{0}{U}_{1} \) . Since \( {\left( {U}_{0}{U}_{1}\right) }^{ * } = {U}_{1}^{ * }{U}_{0}^{ * } \), it follows \( U \) is unitary and \( {U}^{ * }{AU} = T \), so \( A \) is similar to \( T \) . Hence \( A \) and \( T \) have the same characteristic polynomial, and since the eigenvalues of \( T \) are its diagonal entries listed according to algebraic multiplicity, these diagonal entries are the eigenvalues of \( A \) listed according to multiplicity as roots of the characteristic equation. \( \blacksquare \)
Yes
Lemma 7.4.5 Let \( A \) be of the form\n\n\[ A = \left( \begin{matrix} {P}_{1} & \cdots & * \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {P}_{s} \end{matrix}\right) \]\n\nwhere \( {P}_{k} \) is an \( {m}_{k} \times {m}_{k} \) matrix. Then\n\n\[ \det \left( A\right) = \mathop{\prod }\limits_{k}\det \left( {P}_{k}\right) \]\n\nAlso, the eigenvalues of \( A \) consist of the union of the eigenvalues of the \( {P}_{j} \) .
Proof: Let \( {U}_{k} \) be an \( {m}_{k} \times {m}_{k} \) unitary matrix such that\n\n\[ {U}_{k}^{ * }{P}_{k}{U}_{k} = {T}_{k} \]\n\nwhere \( {T}_{k} \) is upper triangular. Then it follows that for\n\n\[ U \equiv \left( \begin{matrix} {U}_{1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {U}_{s} \end{matrix}\right) ,{U}^{ * } = \left( \begin{matrix} {U}_{1}^{ * } & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {U}_{s}^{ * } \end{matrix}\right) \]\n\nand also\n\n\[ \left( \begin{matrix} {U}_{1}^{ * } & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {U}_{s}^{ * } \end{matrix}\right) \left( \begin{matrix} {P}_{1} & \cdots & * \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {P}_{s} \end{matrix}\right) \left( \begin{matrix} {U}_{1} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {U}_{s} \end{matrix}\right) = \left( \begin{matrix} {T}_{1} & \cdots & * \\ \vdots & \ddots & \vdots \\ 0 & \cdots & {T}_{s} \end{matrix}\right) . \]\n\nTherefore, since the determinant of an upper triangular matrix is the product of the diagonal entries,\n\n\[ \det \left( A\right) = \mathop{\prod }\limits_{k}\det \left( {T}_{k}\right) = \mathop{\prod }\limits_{k}\det \left( {P}_{k}\right) . \]\n\nFrom the above formula, the eigenvalues of \( A \) consist of the eigenvalues of the upper triangular matrices \( {T}_{k} \), and each \( {T}_{k} \) has the same eigenvalues as \( {P}_{k} \) .
Yes
Corollary 7.4.7 Let \( A \) be a real \( n \times n \) matrix having only real eigenvalues. Then there exists a real orthogonal matrix \( Q \) and an upper triangular matrix \( T \) such that\n\n\[ {Q}^{T}{AQ} = T \]\n\nand furthermore, if the eigenvalues of \( A \) are listed in decreasing order,\n\n\[ {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \]\n\n\( Q \) can be chosen such that \( T \) is of the form\n\n\[ \left( \begin{matrix} {\lambda }_{1} & * & \cdots & * \\ 0 & {\lambda }_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & * \\ 0 & \cdots & 0 & {\lambda }_{n} \end{matrix}\right) \]
Proof: Most of this follows right away from Theorem 7.4.6. It remains to verify the claim that the diagonal entries can be arranged in the desired order. However, this follows from a simple modification of the above argument. When you find \( {\mathbf{v}}_{1} \), the eigenvector for \( {\lambda }_{1} \), just be sure \( {\lambda }_{1} \) is chosen to be the largest eigenvalue. Then observe that from Lemma 7.4.5 applied to the characteristic equation, the eigenvalues of the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {A}_{1} \) are \( \left\{ {{\lambda }_{2},\cdots ,{\lambda }_{n}}\right\} \) . Then pick \( {\lambda }_{2} \) to continue the process of construction with \( {A}_{1} \) . ∎
Yes
Lemma 7.4.10 If \( T \) is upper triangular and normal, then \( T \) is a diagonal matrix.
Proof: This is obviously true if \( T \) is \( 1 \times 1 \) . In fact, it can’t help being diagonal in this case. Suppose then that the lemma is true for \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrices and let \( T \) be an upper triangular normal \( n \times n \) matrix. Thus \( T \) is of the form\n\n\[ T = \left( \begin{matrix} {t}_{11} & {\mathbf{a}}^{ * } \\ \mathbf{0} & {T}_{1} \end{matrix}\right) ,{T}^{ * } = \left( \begin{matrix} \overline{{t}_{11}} & {\mathbf{0}}^{T} \\ \mathbf{a} & {T}_{1}^{ * } \end{matrix}\right) \]\n\nThen\n\n\[ T{T}^{ * } = \left( \begin{matrix} {t}_{11} & {\mathbf{a}}^{ * } \\ \mathbf{0} & {T}_{1} \end{matrix}\right) \left( \begin{matrix} \overline{{t}_{11}} & {\mathbf{0}}^{T} \\ \mathbf{a} & {T}_{1}^{ * } \end{matrix}\right) = \left( \begin{matrix} {\left| {t}_{11}\right| }^{2} + {\mathbf{a}}^{ * }\mathbf{a} & {\mathbf{a}}^{ * }{T}_{1}^{ * } \\ {T}_{1}\mathbf{a} & {T}_{1}{T}_{1}^{ * } \end{matrix}\right) \]\n\n\[ {T}^{ * }T = \left( \begin{matrix} \overline{{t}_{11}} & {\mathbf{0}}^{T} \\ \mathbf{a} & {T}_{1}^{ * } \end{matrix}\right) \left( \begin{matrix} {t}_{11} & {\mathbf{a}}^{ * } \\ \mathbf{0} & {T}_{1} \end{matrix}\right) = \left( \begin{matrix} {\left| {t}_{11}\right| }^{2} & \overline{{t}_{11}}{\mathbf{a}}^{ * } \\ \mathbf{a}{t}_{11} & \mathbf{a}{\mathbf{a}}^{ * } + {T}_{1}^{ * }{T}_{1} \end{matrix}\right) \]\n\nSince these two matrices are equal, comparing the top left entries shows \( {\mathbf{a}}^{ * }\mathbf{a} = 0 \) and so \( \mathbf{a} = \mathbf{0} \) . But now it follows that \( {T}_{1}^{ * }{T}_{1} = {T}_{1}{T}_{1}^{ * } \) and so by induction \( {T}_{1} \) is a diagonal matrix \( {D}_{1} \) . Therefore,\n\n\[ T = \left( \begin{matrix} {t}_{11} & {\mathbf{0}}^{T} \\ \mathbf{0} & {D}_{1} \end{matrix}\right) \]\n\na diagonal matrix. \( \blacksquare \)
Yes
Theorem 7.4.11 Let \( A \) be a normal matrix. Then there exists a unitary matrix \( U \) such that \( {U}^{ * }{AU} \) is a diagonal matrix.
Proof: From Theorem 7.4.4 there exists a unitary matrix \( U \) such that \( {U}^{ * }{AU} \) equals an upper triangular matrix. The theorem is now proved if it is shown that the property of being normal is preserved under unitary similarity transformations. That is, verify that if \( A \) is normal and if \( B = {U}^{ * }{AU} \), then \( B \) is also normal. But this is easy.\n\n\[ \n{B}^{ * }B = {U}^{ * }{A}^{ * }U{U}^{ * }{AU} = {U}^{ * }{A}^{ * }{AU} \]\n\n\[ \n= \;{U}^{ * }A{A}^{ * }U = {U}^{ * }{AU}{U}^{ * }{A}^{ * }U = B{B}^{ * }. \]\n\nTherefore, \( {U}^{ * }{AU} \) is a normal and upper triangular matrix and by Lemma 7.4.10 it must be a diagonal matrix. \( \blacksquare \)
Yes
Corollary 7.4.12 If \( A \) is Hermitian, then all the eigenvalues of \( A \) are real and there exists an orthonormal basis of eigenvectors.
Proof: Since \( A \) is normal, there exists unitary, \( U \) such that \( {U}^{ * }{AU} = D \), a diagonal matrix whose diagonal entries are the eigenvalues of \( A \) . Therefore, \( {D}^{ * } = {U}^{ * }{A}^{ * }U = {U}^{ * }{AU} = \) \( D \) showing \( D \) is real.\n\nFinally, let\n\n\[ U = \left( \begin{array}{llll} {\mathbf{u}}_{1} & {\mathbf{u}}_{2} & \cdots & {\mathbf{u}}_{n} \end{array}\right) \]\n\nwhere the \( {\mathbf{u}}_{i} \) denote the columns of \( U \) and\n\n\[ D = \left( \begin{matrix} {\lambda }_{1} & & 0 \\ & \ddots & \\ 0 & & {\lambda }_{n} \end{matrix}\right) \]\n\nThe equation, \( {U}^{ * }{AU} = D \) implies\n\n\[ {AU} = \left( \begin{array}{llll} A{\mathbf{u}}_{1} & A{\mathbf{u}}_{2} & \cdots & A{\mathbf{u}}_{n} \end{array}\right) \]\n\n\[ = {UD} = \left( \begin{array}{llll} {\lambda }_{1}{\mathbf{u}}_{1} & {\lambda }_{2}{\mathbf{u}}_{2} & \cdots & {\lambda }_{n}{\mathbf{u}}_{n} \end{array}\right) \]\n\nwhere the entries denote the columns of \( {AU} \) and \( {UD} \) respectively. Therefore, \( A{\mathbf{u}}_{i} = {\lambda }_{i}{\mathbf{u}}_{i} \) and since the matrix is unitary, the \( i{j}^{th} \) entry of \( {U}^{ * }U \) equals \( {\delta }_{ij} \) and so\n\n\[ {\delta }_{ij} = {\mathbf{u}}_{i}^{ * }{\mathbf{u}}_{j} \equiv {\mathbf{u}}_{j} \cdot {\mathbf{u}}_{i} \]\n\nThis proves the corollary because it shows the vectors \( \left\{ {\mathbf{u}}_{i}\right\} \) are orthonormal. Therefore, they form a basis because every orthonormal set of vectors is linearly independent. ∎
Yes
Corollary 7.4.13 If \( A \) is a real symmetric matrix, then \( A \) is Hermitian and there exists a real unitary matrix \( U \) such that \( {U}^{T}{AU} = D \) where \( D \) is a diagonal matrix whose diagonal entries are the eigenvalues of \( A \) . By arranging the columns of \( U \) the diagonal entries of \( D \) can be made to appear in any order.
Proof: This follows from Theorem 7.4.6 and Corollary 7.4.12. Let\n\n\[ U = \left( \begin{array}{lll} {\mathbf{u}}_{1} & \cdots & {\mathbf{u}}_{n} \end{array}\right) \]\n\nThen \( {AU} = {UD} \) so\n\n\[ {AU} = \left( \begin{array}{lll} A{\mathbf{u}}_{1} & \cdots & A{\mathbf{u}}_{n} \end{array}\right) = \left( \begin{array}{lll} {\mathbf{u}}_{1} & \cdots & {\mathbf{u}}_{n} \end{array}\right) D = \left( \begin{array}{lll} {\lambda }_{1}{\mathbf{u}}_{1} & \cdots & {\lambda }_{n}{\mathbf{u}}_{n} \end{array}\right) \]\n\nHence each column of \( U \) is an eigenvector of \( A \) . It follows that by rearranging these columns, the entries of \( D \) on the main diagonal can be made to appear in any order. To see this, consider such a rearrangement resulting in an orthogonal matrix \( {U}^{\prime } \) given by\n\n\[ {U}^{\prime } = \left( \begin{array}{lll} {\mathbf{u}}_{{i}_{1}} & \cdots & {\mathbf{u}}_{{i}_{n}} \end{array}\right) \]\n\nThen\n\n\[ {U}^{\prime T}A{U}^{\prime } = {U}^{\prime T}\left( \begin{array}{lll} A{\mathbf{u}}_{{i}_{1}} & \cdots & A{\mathbf{u}}_{{i}_{n}} \end{array}\right) \]\n\n\[ = \left( \begin{matrix} {\mathbf{u}}_{{i}_{1}}^{T} \\ \vdots \\ {\mathbf{u}}_{{i}_{n}}^{T} \end{matrix}\right) \left( \begin{array}{lll} {\lambda }_{{i}_{1}}{\mathbf{u}}_{{i}_{1}} & \cdots & {\lambda }_{{i}_{n}}{\mathbf{u}}_{{i}_{n}} \end{array}\right) = \left( \begin{matrix} {\lambda }_{{i}_{1}} & & 0 \\ & \ddots & \\ 0 & & {\lambda }_{{i}_{n}} \end{matrix}\right) \blacksquare \]
Yes
Theorem 7.5.2 Let \( A \) be an \( n \times n \) matrix. Then \( \operatorname{trace}\left( A\right) \) equals the sum of the eigenvalues of \( A \) and \( \det \left( A\right) \) equals the product of the eigenvalues of \( A \) .
This is proved using Schur's theorem and is in Problem 12 below.
No
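The theorem can be checked on the matrix of Example 7.1.4, whose eigenvalues were found to be 0, 2, and 4:

```python
A = [[ 2, 2, -2],
     [ 1, 3, -1],
     [-1, 1,  1]]
eigs = [0, 2, 4]  # eigenvalues found in Example 7.1.4

trace = sum(A[i][i] for i in range(3))

# 3x3 determinant by cofactor expansion along the first row
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
     - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
     + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
```

The trace equals the sum of the eigenvalues, and the determinant equals their product (here 0, which is why 0 being an eigenvalue forces the matrix to be singular).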
Theorem 7.5.3 Let \( A \) be an \( m \times n \) matrix and let \( B \) be an \( n \times m \) matrix. Then\n\n\[ \operatorname{trace}\left( {AB}\right) = \operatorname{trace}\left( {BA}\right) . \]
Proof:\n\n\[ \operatorname{trace}\left( {AB}\right) \equiv \mathop{\sum }\limits_{i}\left( {\mathop{\sum }\limits_{k}{A}_{ik}{B}_{ki}}\right) = \mathop{\sum }\limits_{k}\mathop{\sum }\limits_{i}{B}_{ki}{A}_{ik} = \operatorname{trace}\left( {BA}\right) \]\n\n\( \blacksquare \)
Yes
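Note that \( A \) and \( B \) need not be square, only of compatible sizes, so \( {AB} \) and \( {BA} \) can have different sizes yet share the same trace. A quick check with small hypothetical matrices:

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

# A is 2x3 and B is 3x2, so AB is 2x2 while BA is 3x3 (hypothetical data)
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[ 7,  8],
     [ 9, 10],
     [11, 12]]

t1 = trace(matmul(A, B))
t2 = trace(matmul(B, A))
```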
Theorem 7.7.1 Suppose \( f : U \subseteq {\mathbb{F}}^{2} \rightarrow \mathbb{R} \) where \( U \) is an open set on which \( {f}_{x},{f}_{y},{f}_{xy} \) and \( {f}_{yx} \) exist. Then if \( {f}_{xy} \) and \( {f}_{yx} \) are continuous at the point \( \left( {x, y}\right) \in U \), it follows\n\n\[ \n{f}_{xy}\left( {x, y}\right) = {f}_{yx}\left( {x, y}\right) .\n\]
Proof: Since \( U \) is open, there exists \( r > 0 \) such that \( B\left( {\left( {x, y}\right), r}\right) \subseteq U \) . Now let \( \left| t\right| ,\left| s\right| < \) \( r/2, t, s \) real numbers and consider\n\n\[ \n\Delta \left( {s, t}\right) \equiv \frac{1}{st}\{ \overset{h\left( t\right) }{\overbrace{f\left( {x + t, y + s}\right) - f\left( {x + t, y}\right) }} - \overset{h\left( 0\right) }{\overbrace{\left( f\left( x, y + s\right) - f\left( x, y\right) \right) }}\} .\n\]\n\n(7.16)\n\nNote that \( \left( {x + t, y + s}\right) \in U \) because\n\n\[ \n\left| {\left( {x + t, y + s}\right) - \left( {x, y}\right) }\right| = \left| \left( {t, s}\right) \right| = {\left( {t}^{2} + {s}^{2}\right) }^{1/2}\n\]\n\n\[ \n< {\left( \frac{{r}^{2}}{4} + \frac{{r}^{2}}{4}\right) }^{1/2} = \frac{r}{\sqrt{2}} < r\n\]\n\nAs implied above, \( h\left( t\right) \equiv f\left( {x + t, y + s}\right) - f\left( {x + t, y}\right) \) . Therefore, by the mean value theorem from calculus and the (one variable) chain rule,\n\n\[ \n\Delta \left( {s, t}\right) = \frac{1}{st}\left( {h\left( t\right) - h\left( 0\right) }\right) = \frac{1}{st}{h}^{\prime }\left( {\alpha t}\right) t\n\]\n\n\[ \n= \frac{1}{s}\left( {{f}_{x}\left( {x + {\alpha t}, y + s}\right) - {f}_{x}\left( {x + {\alpha t}, y}\right) }\right)\n\]\n\nfor some \( \alpha \in \left( {0,1}\right) \) . 
Applying the mean value theorem again,\n\n\[ \n\Delta \left( {s, t}\right) = {f}_{xy}\left( {x + {\alpha t}, y + {\beta s}}\right)\n\]\n\nwhere \( \alpha ,\beta \in \left( {0,1}\right) \) .\n\nIf the terms \( f\left( {x + t, y}\right) \) and \( f\left( {x, y + s}\right) \) are interchanged in (7.16), \( \Delta \left( {s, t}\right) \) is unchanged and the above argument shows there exist \( \gamma ,\delta \in \left( {0,1}\right) \) such that\n\n\[ \n\Delta \left( {s, t}\right) = {f}_{yx}\left( {x + {\gamma t}, y + {\delta s}}\right) .\n\]\n\nLetting \( \left( {s, t}\right) \rightarrow \left( {0,0}\right) \) and using the continuity of \( {f}_{xy} \) and \( {f}_{yx} \) at \( \left( {x, y}\right) \),\n\n\[ \n\mathop{\lim }\limits_{{\left( {s, t}\right) \rightarrow \left( {0,0}\right) }}\Delta \left( {s, t}\right) = {f}_{xy}\left( {x, y}\right) = {f}_{yx}\left( {x, y}\right) .\n\]
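The theorem is easy to illustrate numerically: a symmetric variant of the quotient \( \Delta \left( {s, t}\right) \) from the proof, with \( s = t = h \), approximates the common value \( {f}_{xy}\left( {x, y}\right) = {f}_{yx}\left( {x, y}\right) \). A Python sketch with the illustrative function \( f\left( {x, y}\right) = {x}^{2}y + x{y}^{3} \), whose mixed partial is \( {2x} + 3{y}^{2} \):

```python
def f(x, y):
    # sample smooth function; f_xy = f_yx = 2x + 3y^2 analytically
    return x * x * y + x * y ** 3

def mixed_partial(f, x, y, h=1e-4):
    # central second difference approximating f_xy(x, y)
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

approx = mixed_partial(f, 1.0, 2.0)
exact = 2 * 1.0 + 3 * 2.0 ** 2      # 14.0
```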
Corollary 7.7.2 Suppose \( U \) is an open subset of \( {\mathbb{R}}^{n} \) and \( f : U \rightarrow \mathbb{R} \) has the property that for two indices, \( k, l,{f}_{{x}_{k}},{f}_{{x}_{l}},{f}_{{x}_{l}{x}_{k}} \), and \( {f}_{{x}_{k}{x}_{l}} \) exist on \( U \) and \( {f}_{{x}_{k}{x}_{l}} \) and \( {f}_{{x}_{l}{x}_{k}} \) are both continuous at \( \mathbf{x} \in U \) . Then \( {f}_{{x}_{k}{x}_{l}}\left( \mathbf{x}\right) = {f}_{{x}_{l}{x}_{k}}\left( \mathbf{x}\right) \) .
Thus the theorem asserts that the mixed partial derivatives are equal at \( \mathbf{x} \) if they are defined near \( \mathbf{x} \) and continuous at \( \mathbf{x} \) .
Theorem 7.7.3 Suppose \( f \) has \( n + 1 \) derivatives on an interval, \( \left( {a, b}\right) \) and let \( c \in \left( {a, b}\right) \) . Then if \( x \in \left( {a, b}\right) \), there exists \( \xi \) between \( c \) and \( x \) such that\n\n\[ f\left( x\right) = f\left( c\right) + \mathop{\sum }\limits_{{k = 1}}^{n}\frac{{f}^{\left( k\right) }\left( c\right) }{k!}{\left( x - c\right) }^{k} + \frac{{f}^{\left( n + 1\right) }\left( \xi \right) }{\left( {n + 1}\right) !}{\left( x - c\right) }^{n + 1}. \]
Proof: If \( n = 0 \) then the theorem is true because it is just the mean value theorem. Suppose the theorem is true for \( n - 1, n \geq 1 \) . It can be assumed \( x \neq c \) because if \( x = c \) there is nothing to show. Then there exists \( K \) such that\n\n\[ f\left( x\right) - \left( {f\left( c\right) + \mathop{\sum }\limits_{{k = 1}}^{n}\frac{{f}^{\left( k\right) }\left( c\right) }{k!}{\left( x - c\right) }^{k} + K{\left( x - c\right) }^{n + 1}}\right) = 0 \]\n\n(7.17)\n\nIn fact,\n\n\[ K = \frac{f\left( x\right) - \left( {f\left( c\right) + \mathop{\sum }\limits_{{k = 1}}^{n}\frac{{f}^{\left( k\right) }\left( c\right) }{k!}{\left( x - c\right) }^{k}}\right) }{{\left( x - c\right) }^{n + 1}}. \]\n\nNow define \( F\left( t\right) \) for \( t \) in the closed interval determined by \( x \) and \( c \) by\n\n\[ F\left( t\right) \equiv f\left( x\right) - \left( {f\left( t\right) + \mathop{\sum }\limits_{{k = 1}}^{n}\frac{{f}^{\left( k\right) }\left( c\right) }{k!}{\left( x - t\right) }^{k} + K{\left( x - t\right) }^{n + 1}}\right) . \]\n\nThe \( c \) in (7.17) got replaced by \( t \) in \( f\left( c\right) \) and in the powers \( {\left( x - c\right) }^{k} \), while the derivatives are still evaluated at \( c \) .\n\nTherefore, \( F\left( c\right) = 0 \) by the way \( K \) was chosen and also \( F\left( x\right) = 0 \) . By the mean value theorem or Rolle’s theorem, there exists \( {t}_{1} \) between \( x \) and \( c \) such that \( {F}^{\prime }\left( {t}_{1}\right) = 0 \) . 
Therefore,\n\n\[ 0 = {f}^{\prime }\left( {t}_{1}\right) - \mathop{\sum }\limits_{{k = 1}}^{n}\frac{{f}^{\left( k\right) }\left( c\right) }{k!}k{\left( x - {t}_{1}\right) }^{k - 1} - K\left( {n + 1}\right) {\left( x - {t}_{1}\right) }^{n} \]\n\n\[ = {f}^{\prime }\left( {t}_{1}\right) - \left( {{f}^{\prime }\left( c\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{f}^{\left( k + 1\right) }\left( c\right) }{k!}{\left( x - {t}_{1}\right) }^{k}}\right) - K\left( {n + 1}\right) {\left( x - {t}_{1}\right) }^{n} \]\n\n\[ = {f}^{\prime }\left( {t}_{1}\right) - \left( {{f}^{\prime }\left( c\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left( {f}^{\prime }\right) }^{\left( k\right) }\left( c\right) }{k!}{\left( x - {t}_{1}\right) }^{k}}\right) - K\left( {n + 1}\right) {\left( x - {t}_{1}\right) }^{n} \]\n\nBy induction applied to \( {f}^{\prime } \), there exists \( \xi \) between \( x \) and \( {t}_{1} \) such that the above simplifies to\n\n\[ 0 = \frac{{\left( {f}^{\prime }\right) }^{\left( n\right) }\left( \xi \right) {\left( x - {t}_{1}\right) }^{n}}{n!} - K\left( {n + 1}\right) {\left( x - {t}_{1}\right) }^{n} \]\n\n\[ = \frac{{f}^{\left( n + 1\right) }\left( \xi \right) {\left( x - {t}_{1}\right) }^{n}}{n!} - K\left( {n + 1}\right) {\left( x - {t}_{1}\right) }^{n} \]\n\ntherefore,\n\n\[ K = \frac{{f}^{\left( n + 1\right) }\left( \xi \right) }{\left( {n + 1}\right) n!} = \frac{{f}^{\left( n + 1\right) }\left( \xi \right) }{\left( {n + 1}\right) !} \]\n\nand the formula is true for \( n \) . \( \blacksquare \)
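As a numerical illustration (not part of the proof), the Lagrange form of the remainder brackets the actual error of a Taylor polynomial. Here the illustrative choice is \( f = \exp, c = 0, x = 1, n = 6 \), so every derivative at \( c \) equals 1 and the remainder is \( {e}^{\xi }/7! \) for some \( \xi \in \left( {0,1}\right) \):

```python
import math

def taylor_poly(derivs_at_c, c, x):
    # derivs_at_c[k] = f^(k)(c); returns the Taylor polynomial at x
    return sum(d / math.factorial(k) * (x - c) ** k
               for k, d in enumerate(derivs_at_c))

n = 6
p = taylor_poly([1.0] * (n + 1), 0.0, 1.0)   # partial sums of e
err = abs(math.exp(1.0) - p)

# exp(xi)/7! for xi in (0, 1) lies between 1/7! and e/7!
bound_lo = 1.0 / math.factorial(n + 1)
bound_hi = math.e / math.factorial(n + 1)
```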
Theorem 7.7.5 Let \( f : U \rightarrow \mathbb{R} \) and let \( f \in {C}^{2}\left( U\right) \) . Then if\n\n\[ B\left( {\mathbf{x}, r}\right) \subseteq U \]\n\nand \( \parallel \mathbf{v}\parallel < r \), there exists \( t \in \left( {0,1}\right) \) such that\n\n\[ f\left( {\mathbf{x} + \mathbf{v}}\right) = f\left( \mathbf{x}\right) + \mathop{\sum }\limits_{{k = 1}}^{n}\frac{\partial f}{\partial {x}_{k}}\left( \mathbf{x}\right) {v}_{k} + \frac{1}{2}\mathop{\sum }\limits_{{k = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{n}\frac{{\partial }^{2}f}{\partial {x}_{j}\partial {x}_{k}}\left( {\mathbf{x} + t\mathbf{v}}\right) {v}_{k}{v}_{j} \]\n\n(7.18)
Definition 7.7.6 Define the following matrix.\n\n\[ {H}_{ij}\left( {\mathbf{x} + t\mathbf{v}}\right) \equiv \frac{{\partial }^{2}f\left( {\mathbf{x} + t\mathbf{v}}\right) }{\partial {x}_{j}\partial {x}_{i}}. \]\n\nIt is called the Hessian matrix. From Corollary 7.7.2, this is a symmetric matrix. Then in terms of this matrix, (7.18) can be written as\n\n\[ f\left( {\mathbf{x} + \mathbf{v}}\right) = f\left( \mathbf{x}\right) + \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {x}_{j}}\left( \mathbf{x}\right) {v}_{j} + \frac{1}{2}{\mathbf{v}}^{T}H\left( {\mathbf{x} + t\mathbf{v}}\right) \mathbf{v} \]\n\nThen this implies \( f\left( {\mathbf{x} + \mathbf{v}}\right) = \)\n\n\[ f\left( \mathbf{x}\right) + \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {x}_{j}}\left( \mathbf{x}\right) {v}_{j} + \frac{1}{2}{\mathbf{v}}^{T}H\left( \mathbf{x}\right) \mathbf{v} + \frac{1}{2}\left( {{\mathbf{v}}^{T}\left( {H\left( {\mathbf{x} + t\mathbf{v}}\right) - H\left( \mathbf{x}\right) }\right) \mathbf{v}}\right) . \]\n\n(7.19)
Theorem 7.7.7 In the above situation, suppose \( {f}_{{x}_{j}}\left( \mathbf{x}\right) = 0 \) for each \( {x}_{j} \) . Then if \( H\left( \mathbf{x}\right) \) has all positive eigenvalues, \( \mathbf{x} \) is a local minimum for \( f \) . If \( H\left( \mathbf{x}\right) \) has all negative eigenvalues, then \( \mathbf{x} \) is a local maximum. If \( H\left( \mathbf{x}\right) \) has a positive eigenvalue, then there exists a direction in which \( f \) has a local minimum at \( \mathbf{x} \), while if \( H\left( \mathbf{x}\right) \) has a negative eigenvalue, there exists a direction in which \( f \) has a local maximum at \( \mathbf{x} \) .
Proof: Since \( {f}_{{x}_{j}}\left( \mathbf{x}\right) = 0 \) for each \( {x}_{j} \), formula (7.19) implies\n\n\[ f\left( {\mathbf{x} + \mathbf{v}}\right) = f\left( \mathbf{x}\right) + \frac{1}{2}{\mathbf{v}}^{T}H\left( \mathbf{x}\right) \mathbf{v} + \frac{1}{2}\left( {{\mathbf{v}}^{T}\left( {H\left( {\mathbf{x} + t\mathbf{v}}\right) - H\left( \mathbf{x}\right) }\right) \mathbf{v}}\right) \]\n\nwhere \( H\left( \mathbf{x}\right) \) is a symmetric matrix. Thus, by Corollary 7.4.12 \( H\left( \mathbf{x}\right) \) has all real eigenvalues. Suppose first that \( H\left( \mathbf{x}\right) \) has all positive eigenvalues and that all are larger than \( {\delta }^{2} > 0 \) . Then \( H\left( \mathbf{x}\right) \) has an orthonormal basis of eigenvectors, \( {\left\{ {\mathbf{v}}_{i}\right\} }_{i = 1}^{n} \) and if \( \mathbf{u} \) is an arbitrary vector, \( \mathbf{u} = \mathop{\sum }\limits_{{j = 1}}^{n}{u}_{j}{\mathbf{v}}_{j} \) where \( {u}_{j} = \mathbf{u} \cdot {\mathbf{v}}_{j} \) . Thus\n\n\[ {\mathbf{u}}^{T}H\left( \mathbf{x}\right) \mathbf{u} = \left( {\mathop{\sum }\limits_{{k = 1}}^{n}{u}_{k}{\mathbf{v}}_{k}^{T}}\right) H\left( \mathbf{x}\right) \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{u}_{j}{\mathbf{v}}_{j}}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{u}_{j}^{2}{\lambda }_{j} \geq {\delta }^{2}\mathop{\sum }\limits_{{j = 1}}^{n}{u}_{j}^{2} = {\delta }^{2}{\left| \mathbf{u}\right| }^{2}. \]\n\nFrom (7.19) and the continuity of \( H \), if \( \mathbf{v} \) is small enough,\n\n\[ f\left( {\mathbf{x} + \mathbf{v}}\right) \geq f\left( \mathbf{x}\right) + \frac{1}{2}{\delta }^{2}{\left| \mathbf{v}\right| }^{2} - \frac{1}{4}{\delta }^{2}{\left| \mathbf{v}\right| }^{2} = f\left( \mathbf{x}\right) + \frac{{\delta }^{2}}{4}{\left| \mathbf{v}\right| }^{2}. \]\n\nThis shows the first claim of the theorem. The second claim follows from similar reasoning. Suppose \( H\left( \mathbf{x}\right) \) has a positive eigenvalue \( {\lambda }^{2} \) . 
Then let \( \mathbf{v} \) be an eigenvector for this eigenvalue. From (7.19),\n\n\[ f\left( {\mathbf{x} + t\mathbf{v}}\right) = f\left( \mathbf{x}\right) + \frac{1}{2}{t}^{2}{\mathbf{v}}^{T}H\left( \mathbf{x}\right) \mathbf{v} + \frac{1}{2}{t}^{2}\left( {{\mathbf{v}}^{T}\left( {H\left( {\mathbf{x} + t\mathbf{v}}\right) - H\left( \mathbf{x}\right) }\right) \mathbf{v}}\right) \]\n\nwhich implies\n\n\[ f\left( {\mathbf{x} + t\mathbf{v}}\right) = f\left( \mathbf{x}\right) + \frac{1}{2}{t}^{2}{\lambda }^{2}{\left| \mathbf{v}\right| }^{2} + \frac{1}{2}{t}^{2}\left( {{\mathbf{v}}^{T}\left( {H\left( {\mathbf{x} + t\mathbf{v}}\right) - H\left( \mathbf{x}\right) }\right) \mathbf{v}}\right) \]\n\n\[ \geq f\left( \mathbf{x}\right) + \frac{1}{4}{t}^{2}{\lambda }^{2}{\left| \mathbf{v}\right| }^{2} \]\n\nwhenever \( t \) is small enough. Thus in the direction \( \mathbf{v} \) the function has a local minimum at \( \mathbf{x} \) . The assertion about the local maximum in some direction follows similarly. ∎
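For a \( 2 \times 2 \) Hessian the eigenvalues have a closed form, so this second derivative test is easy to carry out. A Python sketch with the illustrative function \( f\left( {x, y}\right) = {x}^{2} + 3{y}^{2} - {xy} \), which has a critical point at the origin with constant Hessian \( \left( \begin{matrix} 2 & - 1 \\ - 1 & 6 \end{matrix}\right) \):

```python
import math

def hessian_eigenvalues_2x2(fxx, fxy, fyy):
    # eigenvalues of the symmetric matrix [[fxx, fxy], [fxy, fyy]]
    mean = (fxx + fyy) / 2.0
    disc = math.sqrt(((fxx - fyy) / 2.0) ** 2 + fxy ** 2)
    return mean - disc, mean + disc

# Hessian of f(x, y) = x^2 + 3y^2 - xy at its critical point (0, 0)
lo, hi = hessian_eigenvalues_2x2(2.0, -1.0, 6.0)
# both eigenvalues positive => local minimum at the origin
```

The eigenvalues are \( 4 \pm \sqrt{5} \), both positive, so the origin is a local minimum by the theorem.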
Theorem 7.8.1 Let \( A \) be an \( n \times n \) matrix. Consider the \( n \) Gerschgorin discs defined as\n\n\[ \n{D}_{i} \equiv \left\{ {\lambda \in \mathbb{C} : \left| {\lambda - {a}_{ii}}\right| \leq \mathop{\sum }\limits_{{j \neq i}}\left| {a}_{ij}\right| }\right\} .\n\]\n\nThen every eigenvalue is contained in some Gerschgorin disc.
Proof: Suppose \( A\mathbf{x} = \lambda \mathbf{x} \) where \( \mathbf{x} \neq \mathbf{0} \) . Then for \( A = \left( {a}_{ij}\right) \)\n\n\[ \n\mathop{\sum }\limits_{{j \neq i}}{a}_{ij}{x}_{j} = \left( {\lambda - {a}_{ii}}\right) {x}_{i}\n\]\n\nTherefore, picking \( k \) such that \( \left| {x}_{k}\right| \geq \left| {x}_{j}\right| \) for all \( j \), it follows that \( \left| {x}_{k}\right| \neq 0 \) since \( \left| \mathbf{x}\right| \neq 0 \) and\n\n\[ \n\left| {x}_{k}\right| \mathop{\sum }\limits_{{j \neq k}}\left| {a}_{kj}\right| \geq \mathop{\sum }\limits_{{j \neq k}}\left| {a}_{kj}\right| \left| {x}_{j}\right| \geq \left| {\lambda - {a}_{kk}}\right| \left| {x}_{k}\right| .\n\]\n\nNow dividing by \( \left| {x}_{k}\right| \), it follows \( \lambda \) is contained in the \( {k}^{th} \) Gerschgorin disc. \( \blacksquare \)
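The discs are straightforward to compute from the rows of the matrix. A minimal Python sketch (the \( 2 \times 2 \) matrix is an illustrative choice with eigenvalues 1 and 3):

```python
def gerschgorin_discs(A):
    # one (center, radius) pair per row of A
    return [(A[i][i], sum(abs(A[i][j]) for j in range(len(A)) if j != i))
            for i in range(len(A))]

def in_some_disc(lam, discs, tol=1e-9):
    return any(abs(lam - c) <= r + tol for c, r in discs)

A = [[2.0, 1.0], [1.0, 2.0]]       # eigenvalues 1 and 3
discs = gerschgorin_discs(A)       # both discs are |lambda - 2| <= 1
```

Both eigenvalues, 1 and 3, land on the boundary of the disc \( \left| {\lambda - 2}\right| \leq 1 \), consistent with the theorem.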
Example 7.8.2 Here is a matrix. Estimate its eigenvalues.\n\n\[ \left( \begin{array}{lll} 2 & 1 & 1 \\ 3 & 5 & 0 \\ 0 & 1 & 9 \end{array}\right) \]
According to Gerschgorin's theorem the eigenvalues are contained in the discs \[ {D}_{1} = \{ \lambda \in \mathbb{C} : \left| {\lambda - 2}\right| \leq 2\} ,{D}_{2} = \{ \lambda \in \mathbb{C} : \left| {\lambda - 5}\right| \leq 3\} ,\] \[ {D}_{3} = \{ \lambda \in \mathbb{C} : \left| {\lambda - 9}\right| \leq 1\} \] It is important to observe that these discs are in the complex plane. In general this is the case. If you want to find eigenvalues they will be complex numbers. So what are the values of the eigenvalues? In this case they are real. You can compute them by graphing the characteristic polynomial, \( {\lambda }^{3} - {16}{\lambda }^{2} + {70\lambda } - {66} \) and then zooming in on the zeros. If you do this you find the solution is \( \{ \lambda = {1.2953}\} ,\{ \lambda = {5.5905}\} \) , \( \{ \lambda = {9.1142}\} \) . Of course these are only approximations and so this information is useless for finding eigenvectors. However, in many applications, it is the size of the eigenvalues which is important and so these numerical values would be helpful for such applications. In this case, you might think there is no real reason for Gerschgorin's theorem. Why not just compute the characteristic equation and graph and zoom? This is fine up to a point, but what if the matrix were huge? Then it might be hard to find the characteristic polynomial. Remember the difficulties in expanding a big matrix along a row or column. Also, what if the eigenvalues were complex? You don't see these by following this procedure. However, Gerschgorin's theorem will at least estimate them.
Theorem 7.9.1 Let \( U \) be a region and let \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow U \) be closed, continuous, bounded variation, and the winding number, \( n\left( {\gamma, z}\right) = 0 \) for all \( z \notin U \) . Suppose also that \( f \) is analytic on \( U \) having zeros \( {a}_{1},\cdots ,{a}_{m} \) where the zeros are repeated according to multiplicity, and suppose that none of these zeros are on \( \gamma \left( \left\lbrack {a, b}\right\rbrack \right) \) . Then\n\n\[ \n\frac{1}{2\pi i}{\int }_{\gamma }\frac{{f}^{\prime }\left( z\right) }{f\left( z\right) }{dz} = \mathop{\sum }\limits_{{k = 1}}^{m}n\left( {\gamma ,{a}_{k}}\right) .\n\]
Proof: Since the zeros of \( f \) in \( U \) are \( {a}_{1},\cdots ,{a}_{m} \), listed according to multiplicity, \( f \) factors as \( f\left( z\right) = \mathop{\prod }\limits_{{j = 1}}^{m}\left( {z - {a}_{j}}\right) g\left( z\right) \) where \( g\left( z\right) \neq 0 \) on \( U \) . Hence using the product rule,\n\n\[ \n\frac{{f}^{\prime }\left( z\right) }{f\left( z\right) } = \mathop{\sum }\limits_{{j = 1}}^{m}\frac{1}{z - {a}_{j}} + \frac{{g}^{\prime }\left( z\right) }{g\left( z\right) }\n\]\n\nwhere \( \frac{{g}^{\prime }\left( z\right) }{g\left( z\right) } \) is analytic on \( U \) and so\n\n\[ \n\frac{1}{2\pi i}{\int }_{\gamma }\frac{{f}^{\prime }\left( z\right) }{f\left( z\right) }{dz} = \mathop{\sum }\limits_{{j = 1}}^{m}n\left( {\gamma ,{a}_{j}}\right) + \frac{1}{2\pi i}{\int }_{\gamma }\frac{{g}^{\prime }\left( z\right) }{g\left( z\right) }{dz} = \mathop{\sum }\limits_{{j = 1}}^{m}n\left( {\gamma ,{a}_{j}}\right) .\blacksquare\n\]
Theorem 7.9.4 Suppose \( A\left( t\right) \) is an \( n \times n \) matrix and that \( t \rightarrow A\left( t\right) \) is continuous for \( t \in \left\lbrack {0,1}\right\rbrack \) . Let \( \lambda \left( 0\right) \in \sigma \left( {A\left( 0\right) }\right) \) and define \( \sum \equiv { \cup }_{t \in \left\lbrack {0,1}\right\rbrack }\sigma \left( {A\left( t\right) }\right) \) . Let \( {K}_{\lambda \left( 0\right) } = {K}_{0} \) denote the connected component of \( \lambda \left( 0\right) \) in \( \sum \) . Then \( {K}_{0} \cap \sigma \left( {A\left( t\right) }\right) \neq \varnothing \) for all \( t \in \left\lbrack {0,1}\right\rbrack \) .
Proof: Let \( S \equiv \left\{ {t \in \left\lbrack {0,1}\right\rbrack : {K}_{0} \cap \sigma \left( {A\left( s\right) }\right) \neq \varnothing }\right. \) for all \( \left. {s \in \left\lbrack {0, t}\right\rbrack }\right\} \) . Then \( 0 \in S \) . Let \( {t}_{0} = \) \( \sup \left( S\right) \) . Say \( \sigma \left( {A\left( {t}_{0}\right) }\right) = \left\{ {{\lambda }_{1}\left( {t}_{0}\right) ,\cdots ,{\lambda }_{r}\left( {t}_{0}\right) }\right\} \) .\n\nClaim: At least one of these is a limit point of \( {K}_{0} \) and consequently must be in \( {K}_{0} \) which shows that \( S \) has a last point. Why is this claim true? Let \( {s}_{n} \uparrow {t}_{0} \) so \( {s}_{n} \in S \) . Now let the discs, \( D\left( {{\lambda }_{i}\left( {t}_{0}\right) ,\delta }\right), i = 1,\cdots, r \) be disjoint with \( {p}_{A\left( {t}_{0}\right) } \) having no zeros on \( {\gamma }_{i} \) the boundary of \( D\left( {{\lambda }_{i}\left( {t}_{0}\right) ,\delta }\right) \) . Then for \( n \) large enough it follows from Theorem 7.9.1 and the discussion following it that \( \sigma \left( {A\left( {s}_{n}\right) }\right) \) is contained in \( { \cup }_{i = 1}^{r}D\left( {{\lambda }_{i}\left( {t}_{0}\right) ,\delta }\right) \) . It follows that \( {K}_{0} \cap \left( {\sigma \left( {A\left( {t}_{0}\right) }\right) + D\left( {0,\delta }\right) }\right) \neq \varnothing \) for all \( \delta \) small enough. This requires at least one of the \( {\lambda }_{i}\left( {t}_{0}\right) \) to be in \( \overline{{K}_{0}} \) . Therefore, \( {t}_{0} \in S \) and \( S \) has a last point.\n\nNow by Lemma 7.9.3, if \( {t}_{0} < 1 \), then \( {K}_{0} \cup {K}_{t} \) would be a strictly larger connected set containing \( \lambda \left( 0\right) \) . 
(The reason this would be strictly larger is that \( {K}_{0} \cap \sigma \left( {A\left( s\right) }\right) = \varnothing \) for some \( s \in \left( {t, t + \eta }\right) \) while \( {K}_{t} \cap \sigma \left( {A\left( s\right) }\right) \neq \varnothing \) for all \( s \in \left\lbrack {t, t + \eta }\right\rbrack \) .) Therefore, \( {t}_{0} = 1 \) . \( \blacksquare \)
Corollary 7.9.5 Suppose one of the Gerschgorin discs, \( {D}_{i} \) is disjoint from the union of the others. Then \( {D}_{i} \) contains an eigenvalue of \( A \) . Also, if there are \( n \) disjoint Gerschgorin discs, then each one contains an eigenvalue of \( A \) .
Proof: Denote by \( A\left( t\right) \) the matrix \( \left( {a}_{ij}^{t}\right) \) where if \( i \neq j,{a}_{ij}^{t} = t{a}_{ij} \) and \( {a}_{ii}^{t} = {a}_{ii} \) . Thus to get \( A\left( t\right) \) multiply all non diagonal terms by \( t \) . Let \( t \in \left\lbrack {0,1}\right\rbrack \) . Then \( A\left( 0\right) = \operatorname{diag}\left( {{a}_{11},\cdots ,{a}_{nn}}\right) \) and \( A\left( 1\right) = A \) . Furthermore, the map, \( t \rightarrow A\left( t\right) \) is continuous. Denote by \( {D}_{j}^{t} \) the Gerschgorin disc obtained from the \( {j}^{th} \) row for the matrix \( A\left( t\right) \) . Then it is clear that \( {D}_{j}^{t} \subseteq {D}_{j} \) the \( {j}^{\text{th }} \) Gerschgorin disc for \( A \) . Thus \( {a}_{ii} \) is an eigenvalue of \( A\left( 0\right) \), and its Gerschgorin disc for \( A\left( 0\right) \) consists of the single point \( {a}_{ii} \), which is contained in \( {D}_{i} \) . Letting \( K \) be the connected component in \( \sum \) for \( \sum \) defined in Theorem 7.9.4 which is determined by \( {a}_{ii} \), Gerschgorin’s theorem implies that \( K \cap \sigma \left( {A\left( t\right) }\right) \subseteq { \cup }_{j = 1}^{n}{D}_{j}^{t} \subseteq { \cup }_{j = 1}^{n}{D}_{j} = {D}_{i} \cup \left( {{ \cup }_{j \neq i}{D}_{j}}\right) \) and also, since \( K \) is connected, there are no points of \( K \) in both \( {D}_{i} \) and \( \left( {{ \cup }_{j \neq i}{D}_{j}}\right) \) . Since at least one point of \( K \), namely \( {a}_{ii} \), is in \( {D}_{i} \), it follows all of \( K \) must be contained in \( {D}_{i} \) . Now by Theorem 7.9.4 this shows there are points of \( K \cap \sigma \left( A\right) \) in \( {D}_{i} \) . The last assertion follows immediately. \( \blacksquare \)
Corollary 7.9.7 Suppose one of the Gerschgorin discs, \( {D}_{i} \) is disjoint from the union of the others. Then \( {D}_{i} \) contains exactly one eigenvalue of \( A \) and this eigenvalue is a simple root to the characteristic polynomial of \( A \) .
Proof: In the proof of Corollary 7.9.5, note that \( {a}_{ii} \) is a simple eigenvalue of \( A\left( 0\right) \) since otherwise the \( {i}^{th} \) Gerschgorin disc would not be disjoint from the others. Also, \( K \), the connected component determined by \( {a}_{ii} \) must be contained in \( {D}_{i} \) because it is connected and by Gerschgorin’s theorem above, \( K \cap \sigma \left( {A\left( t\right) }\right) \) must be contained in the union of the Gerschgorin discs. Since all the other eigenvalues of \( A\left( 0\right) \), the \( {a}_{jj} \), are outside \( {D}_{i} \), it follows that \( K \cap \sigma \left( {A\left( 0\right) }\right) = \left\{ {a}_{ii}\right\} \) . Therefore, by Lemma 7.9.6, \( K \cap \sigma \left( {A\left( 1\right) }\right) = K \cap \sigma \left( A\right) \) consists of a single simple eigenvalue. \( \blacksquare \)
Example 7.9.8 Consider the matrix \[ \left( \begin{array}{lll} 5 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{array}\right) \] The Gerschgorin discs are \( D\left( {5,1}\right), D\left( {1,2}\right) \), and \( D\left( {0,1}\right) \). Observe \( D\left( {5,1}\right) \) is disjoint from the other discs. Therefore, there should be an eigenvalue in \( D\left( {5,1}\right) \).
The actual eigenvalues are not easy to find. They are the roots of the characteristic equation, \( {t}^{3} - 6{t}^{2} + {3t} + 5 = 0 \). The numerical values of these are \( - {.66966},{1.4231} \), and \( {5.24655} \), verifying the predictions of Gerschgorin's theorem.
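These numerical claims are easy to check: each stated root nearly annihilates the characteristic polynomial, and only the root near \( {5.25} \) lies in the isolated disc \( D\left( {5,1}\right) \) . A Python sketch:

```python
def p(t):
    # characteristic polynomial of the matrix in Example 7.9.8
    return t ** 3 - 6 * t ** 2 + 3 * t + 5

roots = [-0.66966, 1.4231, 5.24655]
residuals = [abs(p(t)) for t in roots]   # all tiny

# the isolated disc D(5, 1) captures exactly one of the roots
in_D51 = [t for t in roots if abs(t - 5.0) <= 1.0]
```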
Theorem 7.10.1 Suppose \( \Phi \left( t\right) \) is an \( n \times n \) matrix which satisfies \( {\Phi }^{\prime }\left( t\right) = {A\Phi }\left( t\right) \) . Then the general solution to (7.24) is \( \Phi \left( t\right) \mathbf{c} \) if and only if \( \Phi {\left( t\right) }^{-1} \) exists for some \( t \) . Furthermore, if \( {\Phi }^{\prime }\left( t\right) = {A\Phi }\left( t\right) \), then either \( \Phi {\left( t\right) }^{-1} \) exists for all \( t \) or \( \Phi {\left( t\right) }^{-1} \) never exists for any \( t \) .
Hint: Suppose first the general solution is of the form \( \Phi \left( t\right) \mathbf{c} \) where \( \mathbf{c} \) is an arbitrary constant vector in \( {\mathbb{F}}^{n} \) . You need to verify \( \Phi {\left( t\right) }^{-1} \) exists for some \( t \) . In fact, show \( \Phi {\left( t\right) }^{-1} \) exists for every \( t \) . Suppose then that \( \Phi {\left( {t}_{0}\right) }^{-1} \) does not exist. Explain why there exists \( \mathbf{c} \in {\mathbb{F}}^{n} \) such that there is no solution \( \mathbf{x} \) to the equation \( \mathbf{c} = \Phi \left( {t}_{0}\right) \mathbf{x} \) . By the existence part of Problem 38 there exists a solution to\n\n\[{\mathbf{x}}^{\prime } = A\mathbf{x},\mathbf{x}\left( {t}_{0}\right) = \mathbf{c}\]\n\nbut this cannot be in the form \( \Phi \left( t\right) \mathbf{c} \) . Thus for every \( t,\Phi {\left( t\right) }^{-1} \) exists. Next suppose for some \( {t}_{0},\Phi {\left( {t}_{0}\right) }^{-1} \) exists. Let \( {\mathbf{z}}^{\prime } = A\mathbf{z} \) and choose \( \mathbf{c} \) such that\n\n\[\mathbf{z}\left( {t}_{0}\right) = \Phi \left( {t}_{0}\right) \mathbf{c}\]\n\nThen both \( \mathbf{z}\left( t\right) ,\Phi \left( t\right) \mathbf{c} \) solve\n\n\[{\mathbf{x}}^{\prime } = A\mathbf{x},\mathbf{x}\left( {t}_{0}\right) = \mathbf{z}\left( {t}_{0}\right)\]\n\nApply uniqueness to conclude \( \mathbf{z} = \Phi \left( t\right) \mathbf{c} \) . Finally, consider that \( \Phi \left( t\right) \mathbf{c} \) for \( \mathbf{c} \in {\mathbb{F}}^{n} \) either is the general solution or it is not the general solution. If it is, then \( \Phi {\left( t\right) }^{-1} \) exists for all \( t \) . If it is not, then \( \Phi {\left( t\right) }^{-1} \) cannot exist for any \( t \) from what was just shown.
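A concrete instance of the dichotomy (either \( \Phi {\left( t\right) }^{-1} \) exists for every \( t \) or for no \( t \) ) is the diagonal case. The Python sketch below, assuming the illustrative choice \( A = \operatorname{diag}\left( {1, - 2}\right) \) so that \( \Phi \left( t\right) = \operatorname{diag}\left( {{e}^{t},{e}^{-{2t}}}\right) \), checks \( {\Phi }^{\prime } = {A\Phi } \) by a central difference and observes \( \det \Phi \left( t\right) = {e}^{-t} > 0 \) for every \( t \):

```python
import math

def phi(t):
    # fundamental-matrix candidate for A = diag(1, -2)
    return [[math.exp(t), 0.0], [0.0, math.exp(-2 * t)]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# central-difference check of Phi' = A Phi at t = 0.3
h = 1e-6
t = 0.3
dphi = [[(phi(t + h)[i][j] - phi(t - h)[i][j]) / (2 * h)
         for j in range(2)] for i in range(2)]
aphi = [[1.0 * x for x in phi(t)[0]],    # row 0 of A Phi: 1 * row 0
        [-2.0 * x for x in phi(t)[1]]]   # row 1 of A Phi: -2 * row 1
```

Since the determinant never vanishes, \( \Phi {\left( t\right) }^{-1} \) exists for all \( t \), in line with the theorem.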
Corollary 8.2.5 If \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{m}}\right\} \) and \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{n}}\right\} \) are two bases for \( V \), then \( m = n \) .
Proof: By Theorem 8.2.4, \( m \leq n \) and \( n \leq m \) . ∎
Example 8.2.7 Consider the polynomials defined on \( \mathbb{R} \) of degree no more than 3, denoted here as \( {P}_{3} \) . Then show that a basis for \( {P}_{3} \) is \( \left\{ {1, x,{x}^{2},{x}^{3}}\right\} \) . Here \( {x}^{k} \) symbolizes the function \( x \mapsto {x}^{k} \) .
It is obvious that the span of the given vectors yields \( {P}_{3} \) . Why is this set of vectors linearly independent? Suppose\n\n\[ \n{c}_{0} + {c}_{1}x + {c}_{2}{x}^{2} + {c}_{3}{x}^{3} = 0 \n\]\n\nwhere 0 is the zero function which maps everything to 0 . Then you could differentiate three times and obtain the following equations\n\n\[ \n{c}_{1} + 2{c}_{2}x + 3{c}_{3}{x}^{2} = 0 \n\]\n\n\[ \n2{c}_{2} + 6{c}_{3}x = 0 \n\]\n\n\[ \n6{c}_{3} = 0 \n\]\n\nNow this implies \( {c}_{3} = 0 \) . Then from the equations above the bottom one, you find in succession that \( {c}_{2} = 0,{c}_{1} = 0,{c}_{0} = 0 \) .
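The differentiation argument can be mechanized: differentiating the coefficient list three times isolates \( 6{c}_{3} \), after which back substitution gives the rest. A Python sketch with sample numerical coefficients (the particular values are illustrative):

```python
def deriv(coeffs):
    # coefficient list of the derivative of c[0] + c[1] x + c[2] x^2 + ...
    return [k * c for k, c in enumerate(coeffs)][1:]

c0, c1, c2, c3 = 4.0, -3.0, 2.5, 7.0   # arbitrary sample coefficients
p = [c0, c1, c2, c3]
d1 = deriv(p)          # [c1, 2*c2, 3*c3]
d2 = deriv(d1)         # [2*c2, 6*c3]
d3 = deriv(d2)         # [6*c3]
```

If the polynomial were the zero function, each of these derivative lists would also represent the zero function, forcing \( {c}_{3} = 0 \), then \( {c}_{2} = 0,{c}_{1} = 0,{c}_{0} = 0 \), just as in the argument above.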
Theorem 8.2.9 Let \( \left\{ {{f}_{1},\cdots ,{f}_{n}}\right\} \) be smooth functions defined on \( \left\lbrack {a, b}\right\rbrack \) . Then they are linearly independent if there exists some point \( t \in \left\lbrack {a, b}\right\rbrack \) where \( W\left( {{f}_{1},\cdots ,{f}_{n}}\right) \left( t\right) \neq 0 \) .
Proof: Form the linear combination of these vectors (functions) and suppose it equals 0 . Thus\n\n\[ \n{a}_{1}{f}_{1} + {a}_{2}{f}_{2} + \cdots + {a}_{n}{f}_{n} = 0 \n\]\n\nThe question you must answer is whether this requires each \( {a}_{j} \) to equal zero. If they all must equal 0 , then this means these vectors (functions) are independent. This is what it means to be linearly independent.\n\nDifferentiate the above equation \( n - 1 \) times yielding the equations\n\n\[ \n\left( \begin{matrix} {a}_{1}{f}_{1} + {a}_{2}{f}_{2} + \cdots + {a}_{n}{f}_{n} = 0 \\ {a}_{1}{f}_{1}^{\prime } + {a}_{2}{f}_{2}^{\prime } + \cdots + {a}_{n}{f}_{n}^{\prime } = 0 \\ \vdots \\ {a}_{1}{f}_{1}^{\left( n - 1\right) } + {a}_{2}{f}_{2}^{\left( n - 1\right) } + \cdots + {a}_{n}{f}_{n}^{\left( n - 1\right) } = 0 \end{matrix}\right) \n\]\n\nNow plug in \( t \) . Then the above yields\n\n\[ \n\left( \begin{matrix} {f}_{1}\left( t\right) & {f}_{2}\left( t\right) & \cdots & {f}_{n}\left( t\right) \\ {f}_{1}^{\prime }\left( t\right) & {f}_{2}^{\prime }\left( t\right) & \cdots & {f}_{n}^{\prime }\left( t\right) \\ \vdots & \vdots & & \vdots \\ {f}_{1}^{\left( n - 1\right) }\left( t\right) & {f}_{2}^{\left( n - 1\right) }\left( t\right) & \cdots & {f}_{n}^{\left( n - 1\right) }\left( t\right) \end{matrix}\right) \left( \begin{matrix} {a}_{1} \\ {a}_{2} \\ \vdots \\ {a}_{n} \end{matrix}\right) = \left( \begin{matrix} 0 \\ 0 \\ \vdots \\ 0 \end{matrix}\right) \n\]\n\nSince the determinant of the matrix on the left is assumed to be nonzero, it follows this matrix has an inverse and so the only solution to the above system of equations is to have each \( {a}_{k} = 0 \) . \( \blacksquare \)
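For instance, with the illustrative functions \( {f}_{1} = 1,{f}_{2} = t,{f}_{3} = {t}^{2} \), the matrix above is upper triangular and the Wronskian is identically 2, so these functions are linearly independent by the theorem. A Python sketch with the derivatives written out by hand:

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def wronskian(t):
    # rows: the functions 1, t, t^2 and their first two derivatives
    return det3([[1.0, t, t * t],
                 [0.0, 1.0, 2 * t],
                 [0.0, 0.0, 2.0]])
```

The matrix is upper triangular, so the determinant is the product of the diagonal, \( 1 \cdot 1 \cdot 2 = 2 \), regardless of \( t \).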
Lemma 8.2.10 Suppose \( \mathbf{v} \notin \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right) \) and \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right\} \) is linearly independent. Then \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k},\mathbf{v}}\right\} \) is also linearly independent.
Proof: Suppose \( \mathop{\sum }\limits_{{i = 1}}^{k}{c}_{i}{\mathbf{u}}_{i} + d\mathbf{v} = 0 \) . It is required to verify that each \( {c}_{i} = 0 \) and that \( d = 0 \) . But if \( d \neq 0 \), then you can solve for \( \mathbf{v} \) as a linear combination of the vectors, \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right\} \) :\n\n\[ \mathbf{v} = - \mathop{\sum }\limits_{{i = 1}}^{k}\left( \frac{{c}_{i}}{d}\right) {\mathbf{u}}_{i} \]\n\ncontrary to assumption. Therefore, \( d = 0 \) . But then \( \mathop{\sum }\limits_{{i = 1}}^{k}{c}_{i}{\mathbf{u}}_{i} = 0 \) and the linear independence of \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right\} \) implies each \( {c}_{i} = 0 \) also. \( \blacksquare \)
If \( V = \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{n}}\right) \) then some subset of \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{n}}\right\} \) is a basis for V.
Proof: Let\n\n\[ S = \left\{ {E \subseteq \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{n}}\right\} \text{ such that }\operatorname{span}\left( E\right) = V}\right\} .\n\]\n\nFor \( E \in S \), let \( \left| E\right| \) denote the number of elements of \( E \) . Let\n\n\[ m \equiv \min \{ \left| E\right| \text{ such that }E \in S\} .\n\]\n\nThus there exist vectors\n\n\[ \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{m}}\right\} \subseteq \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{n}}\right\}\n\]\n\nsuch that\n\n\[ \operatorname{span}\left( {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{m}}\right) = V\n\]\n\nand \( m \) is as small as possible for this to happen. If this set is linearly independent, it follows it is a basis for \( V \) and the theorem is proved. On the other hand, if the set is not linearly independent, then there exist scalars\n\n\[ {c}_{1},\cdots ,{c}_{m}\n\]\nsuch that\n\n\[ \mathbf{0} = \mathop{\sum }\limits_{{i = 1}}^{m}{c}_{i}{\mathbf{v}}_{i}\n\]\n\nand not all the \( {c}_{i} \) are equal to zero. Suppose \( {c}_{k} \neq 0 \) . Then the vector, \( {\mathbf{v}}_{k} \) may be solved for in terms of the other vectors. Consequently,\n\n\[ V = \operatorname{span}\left( {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{k - 1},{\mathbf{v}}_{k + 1},\cdots ,{\mathbf{v}}_{m}}\right)\n\]\n\ncontradicting the definition of \( m \) . This proves the first part of the theorem.
Theorem 8.2.12 Let \( V \) be a nonzero subspace of a finite dimensional vector space, \( W \) of dimension, \( n \) . Then \( V \) has a basis with no more than \( n \) vectors.
Proof: Let \( {\mathbf{v}}_{1} \in V \) where \( {\mathbf{v}}_{1} \neq 0 \) . If \( \operatorname{span}\left\{ {\mathbf{v}}_{1}\right\} = V \), stop. \( \left\{ {\mathbf{v}}_{1}\right\} \) is a basis for \( V \) . Otherwise, there exists \( {\mathbf{v}}_{2} \in V \) which is not in span \( \left\{ {\mathbf{v}}_{1}\right\} \) . By Lemma 8.2.10 \( \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \) is a linearly independent set of vectors. If \( \operatorname{span}\left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} = V \) stop, \( \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \) is a basis for \( V \) . If \( \operatorname{span}\left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \neq V \), then there exists \( {\mathbf{v}}_{3} \notin \operatorname{span}\left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \) and \( \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2},{\mathbf{v}}_{3}}\right\} \) is a larger linearly independent set of vectors. Continuing this way, the process must stop before \( n + 1 \) steps because if not, it would be possible to obtain \( n + 1 \) linearly independent vectors contrary to the exchange theorem, Theorem 8.2.4.
Lemma 8.3.3 Let \( f\left( \lambda \right) \) and \( g\left( \lambda \right) \neq 0 \) be polynomials. Then there exist polynomials \( q\left( \lambda \right) \) and \( r\left( \lambda \right) \) such that

\[ f\left( \lambda \right) = q\left( \lambda \right) g\left( \lambda \right) + r\left( \lambda \right) \]

where the degree of \( r\left( \lambda \right) \) is less than the degree of \( g\left( \lambda \right) \) or \( r\left( \lambda \right) = 0 \) .
Proof: Consider the polynomials of the form \( f\left( \lambda \right) - g\left( \lambda \right) l\left( \lambda \right) \) and out of all these polynomials, pick one which has the smallest degree. This can be done because of the well ordering of the natural numbers. Let this take place when \( l\left( \lambda \right) = {q}_{1}\left( \lambda \right) \) and let\n\n\[ r\left( \lambda \right) = f\left( \lambda \right) - g\left( \lambda \right) {q}_{1}\left( \lambda \right) .\n\nIt is required to show degree of \( r\left( \lambda \right) < \) degree of \( g\left( \lambda \right) \) or else \( r\left( \lambda \right) = 0 \) .\n\nSuppose \( f\left( \lambda \right) - g\left( \lambda \right) l\left( \lambda \right) \) is never equal to zero for any \( l\left( \lambda \right) \) . Then \( r\left( \lambda \right) \neq 0 \) . It is required to show the degree of \( r\left( \lambda \right) \) is smaller than the degree of \( g\left( \lambda \right) \) . If this doesn’t happen, then the degree of \( r \geq \) the degree of \( g \) . Let\n\n\[ r\left( \lambda \right) = {b}_{m}{\lambda }^{m} + \cdots + {b}_{1}\lambda + {b}_{0} \]\n\n\[ g\left( \lambda \right) = {a}_{n}{\lambda }^{n} + \cdots + {a}_{1}\lambda + {a}_{0} \]\n\nwhere \( m \geq n \) and \( {b}_{m} \) and \( {a}_{n} \) are nonzero. Then let \( {r}_{1}\left( \lambda \right) \) be given by\n\n\[ {r}_{1}\left( \lambda \right) = r\left( \lambda \right) - \frac{{\lambda }^{m - n}{b}_{m}}{{a}_{n}}g\left( \lambda \right) \]\n\n\[ = \left( {{b}_{m}{\lambda }^{m} + \cdots + {b}_{1}\lambda + {b}_{0}}\right) - \frac{{\lambda }^{m - n}{b}_{m}}{{a}_{n}}\left( {{a}_{n}{\lambda }^{n} + \cdots + {a}_{1}\lambda + {a}_{0}}\right) \]\n\nwhich has smaller degree than \( m \), the degree of \( r\left( \lambda \right) \) . 
But\n\n\[ {r}_{1}\left( \lambda \right) = \overset{r\left( \lambda \right) }{\overbrace{f\left( \lambda \right) - g\left( \lambda \right) {q}_{1}\left( \lambda \right) }} - \frac{{\lambda }^{m - n}{b}_{m}}{{a}_{n}}g\left( \lambda \right) \]\n\n\[ = f\left( \lambda \right) - g\left( \lambda \right) \left( {{q}_{1}\left( \lambda \right) + \frac{{\lambda }^{m - n}{b}_{m}}{{a}_{n}}}\right) ,\]\nand this is not zero by the assumption that \( f\left( \lambda \right) - g\left( \lambda \right) l\left( \lambda \right) \) is never equal to zero for any \( l\left( \lambda \right) \) yet has smaller degree than \( r\left( \lambda \right) \) which is a contradiction to the choice of \( r\left( \lambda \right) \) .
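The reduction step in this proof, subtracting \( \frac{{\lambda }^{m - n}{b}_{m}}{{a}_{n}}g\left( \lambda \right) \) to kill the leading term of the remainder and repeating until the degree drops below that of \( g\left( \lambda \right) \), is ordinary polynomial long division. A sketch over \( \mathbb{Q} \), with polynomials as coefficient lists in ascending powers (the function name is mine, not the text's):

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Return (q, r) with f = q*g + r and deg r < deg g, or r = 0.

    Polynomials are coefficient lists, lowest degree first."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while g and g[-1] == 0:              # strip trailing zeros of g
        g.pop()
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while r and r[-1] == 0:
        r.pop()
    while len(r) >= len(g):
        shift = len(r) - len(g)
        c = r[-1] / g[-1]                # b_m / a_n, as in the proof
        q[shift] += c
        # subtract c * lambda^shift * g(lambda), killing the leading term
        r = [r[i] - (c * g[i - shift] if i >= shift else 0) for i in range(len(r))]
        while r and r[-1] == 0:
            r.pop()
    return q, r
```

Dividing \( {\lambda }^{3} + {2\lambda } + 1 \) by \( {\lambda }^{2} + 1 \) this way gives quotient \( \lambda \) and remainder \( \lambda + 1 \) .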
Proposition 8.3.5 The greatest common divisor is unique.
Proof: Suppose both \( q\left( \lambda \right) \) and \( {q}^{\prime }\left( \lambda \right) \) work. Then \( q\left( \lambda \right) \) divides \( {q}^{\prime }\left( \lambda \right) \) and the other way around and so\n\n\[ \n{q}^{\prime }\left( \lambda \right) = q\left( \lambda \right) l\left( \lambda \right), q\left( \lambda \right) = {l}^{\prime }\left( \lambda \right) {q}^{\prime }\left( \lambda \right) \n\]\n\nTherefore, the two must have the same degree. Hence \( {l}^{\prime }\left( \lambda \right), l\left( \lambda \right) \) are both constants. However, this constant must be 1 because both \( q\left( \lambda \right) \) and \( {q}^{\prime }\left( \lambda \right) \) are monic. ∎
Theorem 8.3.6 Let \( \psi \left( \lambda \right) \) be the greatest common divisor of \( \left\{ {{\phi }_{i}\left( \lambda \right) }\right\} \), not all of which are zero polynomials. Then there exist polynomials \( {r}_{i}\left( \lambda \right) \) such that\n\n\[ \psi \left( \lambda \right) = \mathop{\sum }\limits_{{i = 1}}^{p}{r}_{i}\left( \lambda \right) {\phi }_{i}\left( \lambda \right) \]\n\nFurthermore, \( \psi \left( \lambda \right) \) is the monic polynomial of smallest degree which can be written in the above form.
Proof: Let \( S \) denote the set of monic polynomials which are of the form

\[ \mathop{\sum }\limits_{{i = 1}}^{p}{r}_{i}\left( \lambda \right) {\phi }_{i}\left( \lambda \right) \]

where \( {r}_{i}\left( \lambda \right) \) is a polynomial. Then \( S \neq \varnothing \) because some \( {\phi }_{i}\left( \lambda \right) \neq 0 \) and a suitable constant multiple of it is both monic and of the above form. Then let the \( {r}_{i} \) be chosen such that the degree of the expression \( \mathop{\sum }\limits_{{i = 1}}^{p}{r}_{i}\left( \lambda \right) {\phi }_{i}\left( \lambda \right) \) is as small as possible. Letting \( \psi \left( \lambda \right) \) equal this sum, it remains to verify it is the greatest common divisor. First, does it divide each \( {\phi }_{i}\left( \lambda \right) \) ? Suppose it fails to divide \( {\phi }_{1}\left( \lambda \right) \) . Then by Lemma 8.3.3

\[ {\phi }_{1}\left( \lambda \right) = \psi \left( \lambda \right) l\left( \lambda \right) + r\left( \lambda \right) \]

where \( r\left( \lambda \right) \neq 0 \) and the degree of \( r\left( \lambda \right) \) is less than that of \( \psi \left( \lambda \right) \) .
Then dividing \( r\left( \lambda \right) \) by the leading coefficient if necessary and denoting the result by \( {\psi }_{1}\left( \lambda \right) \), it follows the degree of \( {\psi }_{1}\left( \lambda \right) \) is less than the degree of \( \psi \left( \lambda \right) \) and \( {\psi }_{1}\left( \lambda \right) \) equals\n\n\[ {\psi }_{1}\left( \lambda \right) = \left( {{\phi }_{1}\left( \lambda \right) - \psi \left( \lambda \right) l\left( \lambda \right) }\right) a \]\n\n\[ = \left( {{\phi }_{1}\left( \lambda \right) - \mathop{\sum }\limits_{{i = 1}}^{p}{r}_{i}\left( \lambda \right) {\phi }_{i}\left( \lambda \right) l\left( \lambda \right) }\right) a \]\n\n\[ = \left( {\left( {1 - {r}_{1}\left( \lambda \right) }\right) {\phi }_{1}\left( \lambda \right) + \mathop{\sum }\limits_{{i = 2}}^{p}\left( {-{r}_{i}\left( \lambda \right) l\left( \lambda \right) }\right) {\phi }_{i}\left( \lambda \right) }\right) a \]\n\nfor a suitable \( a \in \mathbb{F} \) . This is one of the polynomials in \( S \) . Therefore, \( \psi \left( \lambda \right) \) does not have the smallest degree after all because the degree of \( {\psi }_{1}\left( \lambda \right) \) is smaller. This is a contradiction. Therefore, \( \psi \left( \lambda \right) \) divides \( {\phi }_{1}\left( \lambda \right) \) . Similarly it divides all the other \( {\phi }_{i}\left( \lambda \right) \) .\n\nIf \( p\left( \lambda \right) \) divides all the \( {\phi }_{i}\left( \lambda \right) \), then it divides \( \psi \left( \lambda \right) \) because of the formula for \( \psi \left( \lambda \right) \) which equals \( \mathop{\sum }\limits_{{i = 1}}^{p}{r}_{i}\left( \lambda \right) {\phi }_{i}\left( \lambda \right) \) . ∎
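For two polynomials the coefficients \( {r}_{i}\left( \lambda \right) \) of this theorem can be produced constructively by the extended Euclidean algorithm, which iterates the division of Lemma 8.3.3 while tracking how each remainder is built from the original pair. A hedged sketch over \( \mathbb{Q} \) (helper names are mine):

```python
from fractions import Fraction

def trim(p):
    p = [Fraction(c) for c in p]
    while p and p[-1] == 0:
        p.pop()
    return p                              # [] represents the zero polynomial

def padd(f, g):
    n = max(len(f), len(g))
    return trim([(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
                 for i in range(n)])

def pmul(f, g):
    if not f or not g:
        return []
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return trim(out)

def pdivmod(f, g):
    f, g = trim(f), trim(g)
    q, r = [Fraction(0)] * max(len(f) - len(g) + 1, 1), f
    while len(r) >= len(g):
        shift = len(r) - len(g)
        c = r[-1] / g[-1]
        q[shift] += c
        r = trim([r[i] - (c * g[i - shift] if i >= shift else 0)
                  for i in range(len(r))])
    return trim(q), r

def bezout(f, g):
    """Extended Euclid: return (d, s, t) with d monic, d = gcd(f,g) = s*f + t*g."""
    r0, r1 = trim(f), trim(g)
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, padd(s0, [-c for c in pmul(q, s1)])
        t0, t1 = t1, padd(t0, [-c for c in pmul(q, t1)])
    lead = r0[-1]                         # normalize so the gcd is monic
    return ([c / lead for c in r0],
            [c / lead for c in s0],
            [c / lead for c in t0])
```

For example, \( f = \left( {x - 1}\right) \left( {x + 1}\right) \) and \( g = \left( {x - 1}\right) \left( {x - 2}\right) \) yield the monic greatest common divisor \( x - 1 \) together with an explicit Bezout pair.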
Lemma 8.3.7 Suppose \( \phi \left( \lambda \right) \) and \( \psi \left( \lambda \right) \) are monic polynomials which are irreducible and not equal. Then they are relatively prime.
Proof: Suppose \( \eta \left( \lambda \right) \) is a nonconstant polynomial. If \( \eta \left( \lambda \right) \) divides \( \phi \left( \lambda \right) \), then since \( \phi \left( \lambda \right) \) is irreducible, \( \eta \left( \lambda \right) \) equals \( {a\phi }\left( \lambda \right) \) for some \( a \in \mathbb{F} \) . If \( \eta \left( \lambda \right) \) also divides \( \psi \left( \lambda \right) \), then it must be of the form \( {b\psi }\left( \lambda \right) \) for some \( b \in \mathbb{F} \) and so it follows

\[ \psi \left( \lambda \right) = \frac{a}{b}\phi \left( \lambda \right) \]

but both \( \psi \left( \lambda \right) \) and \( \phi \left( \lambda \right) \) are monic polynomials, which implies \( a = b \) and so \( \psi \left( \lambda \right) = \phi \left( \lambda \right) \), contrary to assumption. It follows that the only polynomials which divide both \( \psi \left( \lambda \right) \) and \( \phi \left( \lambda \right) \) are constants, and the only monic such polynomial is 1 . Thus 1 is the greatest common divisor and the two polynomials are relatively prime. ∎
Lemma 8.3.8 Let \( \psi \left( \lambda \right) \) be an irreducible monic polynomial not equal to 1 which divides \[ \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}},{k}_{i}\text{ a positive integer,} \] where each \( {\phi }_{i}\left( \lambda \right) \) is an irreducible monic polynomial. Then \( \psi \left( \lambda \right) \) equals some \( {\phi }_{i}\left( \lambda \right) \) .
Proof : Suppose \( \psi \left( \lambda \right) \neq {\phi }_{i}\left( \lambda \right) \) for all \( i \) . Then by Lemma 8.3.7, there exist polynomials \( {m}_{i}\left( \lambda \right) ,{n}_{i}\left( \lambda \right) \) such that \[ 1 = \psi \left( \lambda \right) {m}_{i}\left( \lambda \right) + {\phi }_{i}\left( \lambda \right) {n}_{i}\left( \lambda \right) \] Hence \[ {\left( {\phi }_{i}\left( \lambda \right) {n}_{i}\left( \lambda \right) \right) }^{{k}_{i}} = {\left( 1 - \psi \left( \lambda \right) {m}_{i}\left( \lambda \right) \right) }^{{k}_{i}} \] Then, letting \( \widetilde{g}\left( \lambda \right) = \mathop{\prod }\limits_{{i = 1}}^{p}{n}_{i}{\left( \lambda \right) }^{{k}_{i}} \), and applying the binomial theorem, there exists a polynomial \( h\left( \lambda \right) \) such that \[ \widetilde{g}\left( \lambda \right) \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}} \equiv \mathop{\prod }\limits_{{i = 1}}^{p}{n}_{i}{\left( \lambda \right) }^{{k}_{i}}\mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}} \] \[ = \mathop{\prod }\limits_{{i = 1}}^{p}{\left( 1 - \psi \left( \lambda \right) {m}_{i}\left( \lambda \right) \right) }^{{k}_{i}} = 1 + \psi \left( \lambda \right) h\left( \lambda \right) \] Thus, using the fact that \( \psi \left( \lambda \right) \) divides \( \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}} \), for a suitable polynomial \( g\left( \lambda \right) \) , \[ g\left( \lambda \right) \psi \left( \lambda \right) = 1 + \psi \left( \lambda \right) h\left( \lambda \right) \] \[ 1 = \psi \left( \lambda \right) \left( {h\left( \lambda \right) - g\left( \lambda \right) }\right) \] which is impossible if \( \psi \left( \lambda \right) \) is non constant, as assumed. ∎
Lemma 8.3.9 Suppose \( p\left( \lambda \right) \) is a monic polynomial and \( q\left( \lambda \right) \) is a polynomial such that\n\n\[ p\left( \lambda \right) q\left( \lambda \right) = 0. \]\n\nThen \( q\left( \lambda \right) = 0 \) . Also if\n\n\[ p\left( \lambda \right) {q}_{1}\left( \lambda \right) = p\left( \lambda \right) {q}_{2}\left( \lambda \right) \]\n\nthen \( {q}_{1}\left( \lambda \right) = {q}_{2}\left( \lambda \right) \) .
Proof: Let

\[ p\left( \lambda \right) = \mathop{\sum }\limits_{{j = 0}}^{k}{p}_{j}{\lambda }^{j},\;q\left( \lambda \right) = \mathop{\sum }\limits_{{i = 0}}^{n}{q}_{i}{\lambda }^{i},\;{p}_{k} = 1. \]

Then the product equals

\[ \mathop{\sum }\limits_{{j = 0}}^{k}\mathop{\sum }\limits_{{i = 0}}^{n}{p}_{j}{q}_{i}{\lambda }^{i + j} \]

Then look at the terms involving \( {\lambda }^{k + n} \) . The only one is \( {p}_{k}{q}_{n}{\lambda }^{k + n} \), and its coefficient must equal 0 . Since \( {p}_{k} = 1 \), it follows \( {q}_{n} = 0 \) . Thus

\[ \mathop{\sum }\limits_{{j = 0}}^{k}\mathop{\sum }\limits_{{i = 0}}^{{n - 1}}{p}_{j}{q}_{i}{\lambda }^{i + j} = 0. \]

Then consider the term involving \( {\lambda }^{n - 1 + k} \) and conclude that since \( {p}_{k} = 1 \), it follows \( {q}_{n - 1} = 0 \) . Continuing this way, each \( {q}_{i} = 0 \) . This proves the first part. The second follows from

\[ p\left( \lambda \right) \left( {{q}_{1}\left( \lambda \right) - {q}_{2}\left( \lambda \right) }\right) = 0\text{. ∎} \]
Theorem 8.3.10 Let \( f\left( \lambda \right) \) be a nonconstant polynomial with coefficients in \( \mathbb{F} \) . Then there is some \( a \in \mathbb{F} \) such that \( f\left( \lambda \right) = a\mathop{\prod }\limits_{{i = 1}}^{n}{\phi }_{i}\left( \lambda \right) \) where \( {\phi }_{i}\left( \lambda \right) \) is an irreducible nonconstant monic polynomial and repeats are allowed. Furthermore, this factorization is unique in the sense that any two of these factorizations have the same nonconstant factors in the product, possibly in different order and the same constant a.
Proof: That such a factorization exists is straightforward. Factor out the leading coefficient. If the resulting monic polynomial is irreducible, you are done. If not, it factors as \( {\phi }_{1}\left( \lambda \right) {\phi }_{2}\left( \lambda \right) \) where these are monic polynomials of smaller degree. Continue doing this with the \( {\phi }_{i} \) and eventually arrive at a factorization of the desired form.

It remains to argue the factorization is unique except for order of the factors. Suppose

\[ a\mathop{\prod }\limits_{{i = 1}}^{n}{\phi }_{i}\left( \lambda \right) = b\mathop{\prod }\limits_{{i = 1}}^{m}{\psi }_{i}\left( \lambda \right) \]

where the \( {\phi }_{i}\left( \lambda \right) \) and the \( {\psi }_{i}\left( \lambda \right) \) are all irreducible monic nonconstant polynomials and \( a, b \in \mathbb{F} \) . By Lemma 8.3.8, each \( {\psi }_{i}\left( \lambda \right) \) equals one of the \( {\phi }_{j}\left( \lambda \right) \) . By the above cancellation lemma, Lemma 8.3.9, you can cancel all these \( {\psi }_{i}\left( \lambda \right) \) with appropriate \( {\phi }_{j}\left( \lambda \right) \) . If \( n \neq m \), this yields a contradiction because the resulting polynomials on the two sides would have different degrees. It follows \( n = m \) and the two products consist of the same polynomials. Then it follows \( a = b \) . ∎
Corollary 8.3.11 Let \( q\left( \lambda \right) = \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}} \) where the \( {k}_{i} \) are positive integers and the \( {\phi }_{i}\left( \lambda \right) \) are irreducible monic polynomials. Suppose also that \( p\left( \lambda \right) \) is a monic polynomial which divides \( q\left( \lambda \right) \) . Then \[ p\left( \lambda \right) = \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{r}_{i}} \] where \( {r}_{i} \) is a nonnegative integer no larger than \( {k}_{i} \) .
Proof: Using Theorem 8.3.10, let \( p\left( \lambda \right) = b\mathop{\prod }\limits_{{i = 1}}^{s}{\psi }_{i}{\left( \lambda \right) }^{{r}_{i}} \) where the \( {\psi }_{i}\left( \lambda \right) \) are each irreducible and monic and \( b \in \mathbb{F} \) . Since \( p\left( \lambda \right) \) is monic, \( b = 1 \) . Then there exists a polynomial \( g\left( \lambda \right) \) such that \[ p\left( \lambda \right) g\left( \lambda \right) = g\left( \lambda \right) \mathop{\prod }\limits_{{i = 1}}^{s}{\psi }_{i}{\left( \lambda \right) }^{{r}_{i}} = \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}} \] Hence \( g\left( \lambda \right) \) must be monic. Therefore, \[ p\left( \lambda \right) g\left( \lambda \right) = \overset{p\left( \lambda \right) }{\overbrace{\mathop{\prod }\limits_{{i = 1}}^{s}{\psi }_{i}{\left( \lambda \right) }^{{r}_{i}}}}\mathop{\prod }\limits_{{j = 1}}^{l}{\eta }_{j}\left( \lambda \right) = \mathop{\prod }\limits_{{i = 1}}^{p}{\phi }_{i}{\left( \lambda \right) }^{{k}_{i}} \] for \( {\eta }_{j} \) monic and irreducible. By uniqueness, each \( {\psi }_{i} \) equals one of the \( {\phi }_{j}\left( \lambda \right) \) and the same holding true of the \( {\eta }_{i}\left( \lambda \right) \) . Therefore, \( p\left( \lambda \right) \) is of the desired form.
Proposition 8.3.16 In the above definition, \( \sim \) is an equivalence relation.
Proof: First of all, note that \( a\left( x\right) \sim a\left( x\right) \) because their difference equals \( {0p}\left( x\right) \) . If \( a\left( x\right) \sim b\left( x\right) \), then \( a\left( x\right) - b\left( x\right) = k\left( x\right) p\left( x\right) \) for some \( k\left( x\right) \) . But then \( b\left( x\right) - a\left( x\right) = \) \( - k\left( x\right) p\left( x\right) \) and so \( b\left( x\right) \sim a\left( x\right) \) . Next suppose \( a\left( x\right) \sim b\left( x\right) \) and \( b\left( x\right) \sim c\left( x\right) \) . Then \( a\left( x\right) - b\left( x\right) = k\left( x\right) p\left( x\right) \) for some polynomial \( k\left( x\right) \) and also \( b\left( x\right) - c\left( x\right) = l\left( x\right) p\left( x\right) \) for some polynomial \( l\left( x\right) \) . Then\n\n\[ a\left( x\right) - c\left( x\right) = a\left( x\right) - b\left( x\right) + b\left( x\right) - c\left( x\right) \]\n\n\[ = k\left( x\right) p\left( x\right) + l\left( x\right) p\left( x\right) = \left( {l\left( x\right) + k\left( x\right) }\right) p\left( x\right) \]\n\nand so \( a\left( x\right) \sim c\left( x\right) \) and this shows the transitive law. ∎
Proposition 8.3.18 In the situation of Definition 8.3.17, \( p\left( x\right) \) and \( q\left( x\right) \) are relatively prime for any \( q\left( x\right) \in \mathbb{F}\left\lbrack x\right\rbrack \) which is not a multiple of \( p\left( x\right) \) . Also the definitions of addition and multiplication are well defined. In addition, if \( a, b \in \mathbb{F} \) and \( \left\lbrack a\right\rbrack = \left\lbrack b\right\rbrack \), then \( a = b \) .
Proof: First consider the claim about \( p\left( x\right), q\left( x\right) \) being relatively prime. If \( \psi \left( x\right) \) is the greatest common divisor, it follows \( \psi \left( x\right) \) is either equal to \( p\left( x\right) \) or 1 . If it is \( p\left( x\right) \), then \( q\left( x\right) \) is a multiple of \( p\left( x\right) \) . If it is 1, then by definition, the two polynomials are relatively prime.

To show the operations are well defined, suppose

\[ \left\lbrack {a\left( x\right) }\right\rbrack = \left\lbrack {{a}^{\prime }\left( x\right) }\right\rbrack ,\left\lbrack {b\left( x\right) }\right\rbrack = \left\lbrack {{b}^{\prime }\left( x\right) }\right\rbrack \]

It is necessary to show

\[ \left\lbrack {a\left( x\right) + b\left( x\right) }\right\rbrack = \left\lbrack {{a}^{\prime }\left( x\right) + {b}^{\prime }\left( x\right) }\right\rbrack \]

\[ \left\lbrack {a\left( x\right) b\left( x\right) }\right\rbrack = \left\lbrack {{a}^{\prime }\left( x\right) {b}^{\prime }\left( x\right) }\right\rbrack \]

Consider the second of the two.

\[ {a}^{\prime }\left( x\right) {b}^{\prime }\left( x\right) - a\left( x\right) b\left( x\right) \]

\[ = {a}^{\prime }\left( x\right) {b}^{\prime }\left( x\right) - a\left( x\right) {b}^{\prime }\left( x\right) + a\left( x\right) {b}^{\prime }\left( x\right) - a\left( x\right) b\left( x\right) \]

\[ = {b}^{\prime }\left( x\right) \left( {{a}^{\prime }\left( x\right) - a\left( x\right) }\right) + a\left( x\right) \left( {{b}^{\prime }\left( x\right) - b\left( x\right) }\right) \]

Now by assumption \( \left( {{a}^{\prime }\left( x\right) - a\left( x\right) }\right) \) is a multiple of \( p\left( x\right) \) as is \( \left( {{b}^{\prime }\left( x\right) - b\left( x\right) }\right) \), so the above is a multiple of \( p\left( x\right) \) and by definition this shows \( \left\lbrack {a\left( x\right) b\left( x\right) }\right\rbrack = \left\lbrack {{a}^{\prime }\left( x\right) {b}^{\prime }\left( x\right) }\right\rbrack \) . The case for addition is similar.

Now suppose \( \left\lbrack a\right\rbrack = \left\lbrack b\right\rbrack \) . This means \( a - b = k\left( x\right) p\left( x\right) \) for some polynomial \( k\left( x\right) \) . Then \( k\left( x\right) \) must equal 0 since otherwise the two polynomials \( a - b \) and \( k\left( x\right) p\left( x\right) \) could not be equal because they would have different degree. Hence \( a - b = 0 \) and so \( a = b \) . ∎
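The computation above can also be tested numerically: replace \( a\left( x\right) \) and \( b\left( x\right) \) by other representatives of the same classes and the sum and product reduce to the same remainders modulo \( p\left( x\right) \) . A sketch over \( \mathbb{Q} \) with \( p\left( x\right) = {x}^{2} + 1 \) (helper names and the particular representatives are my own choices):

```python
from fractions import Fraction

def trim(p):
    p = [Fraction(c) for c in p]
    while p and p[-1] == 0:
        p.pop()
    return p

def padd(f, g):
    n = max(len(f), len(g))
    return trim([(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
                 for i in range(n)])

def pmul(f, g):
    if not f or not g:
        return []
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return trim(out)

def pmod(f, g):
    """Remainder of f on division by g, the canonical representative of [f]."""
    f, g = trim(f), trim(g)
    while len(f) >= len(g):
        shift = len(f) - len(g)
        c = f[-1] / g[-1]
        f = trim([f[i] - (c * g[i - shift] if i >= shift else 0)
                  for i in range(len(f))])
    return f

p = [1, 0, 1]                        # p(x) = x^2 + 1
a = [0, 1]                           # a(x) = x
b = [3, 0, 0, 1]                     # b(x) = x^3 + 3
a2 = padd(a, pmul([5, 2], p))        # a'(x) = a(x) + (2x + 5) p(x), so [a'] = [a]
b2 = padd(b, pmul([0, 0, 7], p))     # b'(x) = b(x) + 7x^2 p(x), so [b'] = [b]
```

Reducing \( {ab} \) and \( {a}^{\prime }{b}^{\prime } \) modulo \( p\left( x\right) \) gives the same remainder, and likewise for the sums, in line with the proposition.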
Proposition 8.3.21 Let \( F \subseteq K \subseteq L \) be fields. Then \( \left\lbrack {L : F}\right\rbrack = \left\lbrack {L : K}\right\rbrack \left\lbrack {K : F}\right\rbrack \) .
Proof: Let \( {\left\{ {l}_{i}\right\} }_{i = 1}^{n} \) be a basis for \( L \) over \( K \) and let \( {\left\{ {k}_{j}\right\} }_{j = 1}^{m} \) be a basis of \( K \) over \( F \) . Then if \( l \in L \), there exist unique scalars \( {x}_{i} \) in \( K \) such that

\[ l = \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{l}_{i} \]

Now \( {x}_{i} \in K \) so there exist \( {f}_{ji} \in F \) such that

\[ {x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{m}{f}_{ji}{k}_{j} \]

Then it follows that

\[ l = \mathop{\sum }\limits_{{i = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{m}{f}_{ji}{k}_{j}{l}_{i} \]

It follows that \( \left\{ {{k}_{j}{l}_{i}}\right\} \) is a spanning set. If

\[ \mathop{\sum }\limits_{{i = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{m}{f}_{ji}{k}_{j}{l}_{i} = 0 \]

then, since the \( {l}_{i} \) are independent, it follows that

\[ \mathop{\sum }\limits_{{j = 1}}^{m}{f}_{ji}{k}_{j} = 0 \]

and since \( \left\{ {k}_{j}\right\} \) is independent, each \( {f}_{ji} = 0 \) for each \( j \) for a given arbitrary \( i \) . Therefore, \( \left\{ {{k}_{j}{l}_{i}}\right\} \) is a basis consisting of \( {nm} \) vectors, and so \( \left\lbrack {L : F}\right\rbrack = {nm} = \left\lbrack {L : K}\right\rbrack \left\lbrack {K : F}\right\rbrack \) . ∎
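A standard concrete instance of this product formula (my example, not the text's): take \( F = \mathbb{Q} \), \( K = \mathbb{Q}\left( \sqrt{2}\right) \), \( L = \mathbb{Q}\left( {\sqrt{2},\sqrt{3}}\right) \), with bases \( \left\{ {1,\sqrt{2}}\right\} \) for \( K \) over \( F \) and \( \left\{ {1,\sqrt{3}}\right\} \) for \( L \) over \( K \) . Then

```latex
\[
\left\lbrack L : F\right\rbrack
  = \left\lbrack L : K\right\rbrack \left\lbrack K : F\right\rbrack
  = 2 \cdot 2 = 4,
\]
```

and the products \( {k}_{j}{l}_{i} \) give exactly the basis \( \left\{ {1,\sqrt{2},\sqrt{3},\sqrt{6}}\right\} \) of \( L \) over \( \mathbb{Q} \) .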
Theorem 8.3.22 The set of all equivalence classes \( \mathbb{G} \equiv \mathbb{F}\left\lbrack x\right\rbrack /\left( {p\left( x\right) }\right) \) described above, with the multiplicative identity given by \( \left\lbrack 1\right\rbrack \) and the additive identity given by \( \left\lbrack 0\right\rbrack \) along with the operations of Definition 8.3.17, is a field and \( p\left( \left\lbrack x\right\rbrack \right) = \left\lbrack 0\right\rbrack \) . (Thus \( p \) has a root in this new field.) In addition to this, \( \left\lbrack {\mathbb{G} : \mathbb{F}}\right\rbrack = n \), the degree of \( p\left( x\right) \) .
Proof: Everything is obvious except for the existence of the multiplicative inverse and the assertion that \( p\left( \left\lbrack x\right\rbrack \right) = 0 \) . Suppose then that \( \left\lbrack {a\left( x\right) }\right\rbrack \neq \left\lbrack 0\right\rbrack \) . That is, \( a\left( x\right) \) is not a multiple of \( p\left( x\right) \) . Why does \( {\left\lbrack a\left( x\right) \right\rbrack }^{-1} \) exist? By Proposition 8.3.18, \( a\left( x\right) \) and \( p\left( x\right) \) are relatively prime, and so by Theorem 8.3.6 there exist polynomials \( \psi \left( x\right) ,\phi \left( x\right) \) such that

\[ 1 = \psi \left( x\right) p\left( x\right) + a\left( x\right) \phi \left( x\right) \]

and so

\[ 1 - a\left( x\right) \phi \left( x\right) = \psi \left( x\right) p\left( x\right) \]

which, by definition implies

\[ \left\lbrack {1 - a\left( x\right) \phi \left( x\right) }\right\rbrack = \left\lbrack 1\right\rbrack - \left\lbrack {a\left( x\right) \phi \left( x\right) }\right\rbrack = \left\lbrack 1\right\rbrack - \left\lbrack {a\left( x\right) }\right\rbrack \left\lbrack {\phi \left( x\right) }\right\rbrack = \left\lbrack 0\right\rbrack \]

and so \( \left\lbrack {\phi \left( x\right) }\right\rbrack = {\left\lbrack a\left( x\right) \right\rbrack }^{-1} \) . This shows \( \mathbb{G} \) is a field.

Now if \( p\left( x\right) = {a}_{n}{x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{1}x + {a}_{0} \), then \( p\left( \left\lbrack x\right\rbrack \right) = 0 \) by 8.7 and the definition which says \( \left\lbrack {p\left( x\right) }\right\rbrack = \left\lbrack 0\right\rbrack \) .

Consider the claim about the dimension. It was just shown that \( \left\lbrack 1\right\rbrack ,\left\lbrack x\right\rbrack ,\left\lbrack {x}^{2}\right\rbrack ,\cdots ,\left\lbrack {x}^{n}\right\rbrack \) is linearly dependent.
Also \( \left\lbrack 1\right\rbrack ,\left\lbrack x\right\rbrack ,\left\lbrack {x}^{2}\right\rbrack ,\cdots ,\left\lbrack {x}^{n - 1}\right\rbrack \) is independent because if not, there would exist a nonzero polynomial \( q\left( x\right) \) of degree at most \( n - 1 \) which is a multiple of \( p\left( x\right) \), which is impossible. Now for \( \left\lbrack {q\left( x\right) }\right\rbrack \in \mathbb{G} \), you can write

\[ q\left( x\right) = p\left( x\right) l\left( x\right) + r\left( x\right) \]

where the degree of \( r\left( x\right) \) is less than \( n \) or else it equals 0 . Either way, \( \left\lbrack {q\left( x\right) }\right\rbrack = \left\lbrack {r\left( x\right) }\right\rbrack \) which is a linear combination of \( \left\lbrack 1\right\rbrack ,\left\lbrack x\right\rbrack ,\left\lbrack {x}^{2}\right\rbrack ,\cdots ,\left\lbrack {x}^{n - 1}\right\rbrack \) . Thus \( \left\lbrack {\mathbb{G} : \mathbb{F}}\right\rbrack = n \) as claimed. ∎
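The inverse produced in this proof can be checked in a concrete case. In \( \mathbb{Q}\left\lbrack x\right\rbrack /\left( {{x}^{3} - 2}\right) \) one has \( {\left\lbrack x\right\rbrack }^{3} = \left\lbrack 2\right\rbrack \), so \( {\left\lbrack x\right\rbrack }^{-1} = \left\lbrack {{x}^{2}/2}\right\rbrack \) . The sketch below verifies this by reducing the product modulo \( p\left( x\right) \) (helper names are mine, and this particular inverse was found by hand rather than from the Bezout identity of Theorem 8.3.6):

```python
from fractions import Fraction

def trim(p):
    p = [Fraction(c) for c in p]
    while p and p[-1] == 0:
        p.pop()
    return p

def pmul(f, g):
    if not f or not g:
        return []
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return trim(out)

def pmod(f, g):
    """Remainder of f on division by g."""
    f, g = trim(f), trim(g)
    while len(f) >= len(g):
        shift = len(f) - len(g)
        c = f[-1] / g[-1]
        f = trim([f[i] - (c * g[i - shift] if i >= shift else 0)
                  for i in range(len(f))])
    return f

p = [-2, 0, 0, 1]                    # p(x) = x^3 - 2, irreducible over Q
a = [0, 1]                           # a(x) = x
a_inv = [0, 0, Fraction(1, 2)]       # phi(x) = x^2 / 2
```

Reducing \( x \cdot {x}^{2}/2 = {x}^{3}/2 \) modulo \( {x}^{3} - 2 \) leaves the constant 1, confirming \( \left\lbrack {\phi \left( x\right) }\right\rbrack = {\left\lbrack a\left( x\right) \right\rbrack }^{-1} \) in this field.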
Theorem 8.3.23 Let \( p\left( x\right) = {x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{1}x + {a}_{0} \) be a polynomial with coefficients in a field of scalars \( \mathbb{F} \) . There exists a larger field \( \mathbb{G} \) such that there exist \( \left\{ {{z}_{1},\cdots ,{z}_{n}}\right\} \) listed according to multiplicity such that\n\n\[ p\left( x\right) = \mathop{\prod }\limits_{{i = 1}}^{n}\left( {x - {z}_{i}}\right) \]\n\nThis larger field is called a splitting field. Furthermore,\n\n\[ \left\lbrack {\mathbb{G} : \mathbb{F}}\right\rbrack \leq n! \]
Proof: From Theorem 8.3.22, there exists a field \( {\mathbb{F}}_{1} \) such that \( p\left( x\right) \) has a root, \( {z}_{1}( = \left\lbrack x\right\rbrack \) if \( p \) is irreducible.) Then by the Euclidean algorithm

\[ p\left( x\right) = \left( {x - {z}_{1}}\right) {q}_{1}\left( x\right) + r \]

where \( r \in {\mathbb{F}}_{1} \) . Since \( p\left( {z}_{1}\right) = 0 \), this requires \( r = 0 \) . Now do the same for \( {q}_{1}\left( x\right) \) that was done for \( p\left( x\right) \), enlarging the field to \( {\mathbb{F}}_{2} \) if necessary, such that in this new field

\[ {q}_{1}\left( x\right) = \left( {x - {z}_{2}}\right) {q}_{2}\left( x\right) . \]

and so

\[ p\left( x\right) = \left( {x - {z}_{1}}\right) \left( {x - {z}_{2}}\right) {q}_{2}\left( x\right) \]

After \( n \) such extensions, you will have obtained the necessary field \( \mathbb{G} \) .

Finally consider the claim about dimension. By Theorem 8.3.22, there is a larger field \( {\mathbb{G}}_{1} \) such that \( p\left( x\right) \) has a root \( {a}_{1} \) in \( {\mathbb{G}}_{1} \) and \( \left\lbrack {{\mathbb{G}}_{1} : \mathbb{F}}\right\rbrack \leq n \) . Then

\[ p\left( x\right) = \left( {x - {a}_{1}}\right) q\left( x\right) \]

Continue this way until the polynomial equals the product of linear factors. Then by Proposition 8.3.21 applied multiple times, \( \left\lbrack {\mathbb{G} : \mathbb{F}}\right\rbrack \leq n! \) . ∎
The polynomial \( {x}^{2} + 1 \) is irreducible in \( \mathbb{R}\left\lbrack x\right\rbrack \), the polynomials having real coefficients.
To see this is the case, suppose \( \psi \left( x\right) \) divides \( {x}^{2} + 1 \) . Then

\[ {x}^{2} + 1 = \psi \left( x\right) q\left( x\right) \]

If the degree of \( \psi \left( x\right) \) is less than 2, then it must be either a constant or of the form \( {ax} + b \) with \( a \neq 0 \) . In the latter case, \( - b/a \) must be a zero of the right side, hence of the left, but \( {x}^{2} + 1 \) has no real zeros. Therefore, either \( \psi \left( x\right) \) is a constant or its degree equals two, in which case \( q\left( x\right) \) must be a constant. Thus the only polynomials which divide \( {x}^{2} + 1 \) are constants and multiples of \( {x}^{2} + 1 \) . Therefore, this shows \( {x}^{2} + 1 \) is irreducible.
Proposition 8.3.25 Suppose \( p\left( x\right) \in \mathbb{F}\left\lbrack x\right\rbrack \) is irreducible and has degree \( n \) . Then every element of \( \mathbb{G} = \mathbb{F}\left\lbrack x\right\rbrack /\left( {p\left( x\right) }\right) \) is of the form \( \left\lbrack 0\right\rbrack \) or \( \left\lbrack {r\left( x\right) }\right\rbrack \) where the degree of \( r\left( x\right) \) is less than \( n \) .
Proof: This follows right away from the Euclidean algorithm for polynomials. If \( k\left( x\right) \) has degree larger than \( n - 1 \), then\n\n\[ k\left( x\right) = q\left( x\right) p\left( x\right) + r\left( x\right) \]\n\nwhere \( r\left( x\right) \) is either equal to 0 or has degree less than \( n \) . Hence\n\n\[ \left\lbrack {k\left( x\right) }\right\rbrack = \left\lbrack {r\left( x\right) }\right\rbrack \text{.}\blacksquare \]