Example 8.3.26 In the situation of the above example, find \( {\left\lbrack ax + b\right\rbrack }^{-1} \) assuming \( {a}^{2} + {b}^{2} \neq 0 \). Note this includes all cases of interest thanks to the above proposition.
You can do it with partial fractions as above.

\[ \frac{1}{\left( {{x}^{2} + 1}\right) \left( {{ax} + b}\right) } = \frac{b - {ax}}{\left( {{a}^{2} + {b}^{2}}\right) \left( {{x}^{2} + 1}\right) } + \frac{{a}^{2}}{\left( {{a}^{2} + {b}^{2}}\right) \left( {{ax} + b}\right) } \]

and so

\[ 1 = \frac{1}{{a}^{2} + {b}^{2}}\left( {b - {ax}}\right) \left( {{ax} + b}\right) + \frac{{a}^{2}}{{a}^{2} + {b}^{2}}\left( {{x}^{2} + 1}\right) \]

Thus

\[ \frac{1}{{a}^{2} + {b}^{2}}\left( {b - {ax}}\right) \left( {{ax} + b}\right) \sim 1 \]

and so

\[ {\left\lbrack ax + b\right\rbrack }^{-1} = \frac{\left\lbrack b - ax\right\rbrack }{{a}^{2} + {b}^{2}} = \frac{b - a\left\lbrack x\right\rbrack }{{a}^{2} + {b}^{2}} \]
Yes
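The identity used in the example can be spot-checked numerically. A minimal sketch; the sample values of \( a, b, x \) are arbitrary choices, not from the text:

```python
# Spot-check of the identity
#   1 = (1/(a^2+b^2)) (b - ax)(ax + b) + (a^2/(a^2+b^2)) (x^2 + 1)
# at a few arbitrary sample points, assuming a^2 + b^2 != 0.

def identity_residual(a, b, x):
    s = a * a + b * b
    return (b - a * x) * (a * x + b) / s + a * a * (x * x + 1.0) / s - 1.0

for a, b, x in [(1.0, 2.0, 0.5), (3.0, -1.0, 2.0), (0.5, 0.25, -4.0)]:
    assert abs(identity_residual(a, b, x)) < 1e-12
print("identity verified at all sample points")
```

The residual is algebraically zero because \( (b-ax)(ax+b) + a^2(x^2+1) = b^2 - a^2x^2 + a^2x^2 + a^2 = a^2 + b^2 \).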
Theorem 8.3.28 Let \( a \in \mathbb{A} \). Then there exists a unique monic irreducible polynomial \( p\left( x\right) \) having coefficients in \( \mathbb{F} \) such that \( p\left( a\right) = 0 \). This is called the minimal polynomial for \( a \).
Proof: By definition, there exists a polynomial \( q\left( x\right) \) having coefficients in \( \mathbb{F} \) such that \( q\left( a\right) = 0 \) . If \( q\left( x\right) \) is irreducible, divide by the leading coefficient and this proves the existence. If \( q\left( x\right) \) is not irreducible, then there exist nonconstant polynomials \( r\left( x\right) \) and \( k\left( x\right) \) such that \( q\left( x\right) = r\left( x\right) k\left( x\right) \) . Then one of \( r\left( a\right), k\left( a\right) \) equals 0 . Pick the one which equals zero and let it play the role of \( q\left( x\right) \) . Continuing this way, in finitely many steps one obtains an irreducible polynomial \( p\left( x\right) \) such that \( p\left( a\right) = 0 \) . Now divide by the leading coefficient and this proves existence. Suppose \( {p}_{i}, i = 1,2 \) both work and they are not equal. Then by Lemma 8.3.7 they must be relatively prime because they are both assumed to be irreducible and so there exist polynomials \( l\left( x\right), k\left( x\right) \) such that \[ 1 = l\left( x\right) {p}_{1}\left( x\right) + k\left( x\right) {p}_{2}\left( x\right) \] But now when \( a \) is substituted for \( x \), this yields \( 0 = 1 \), a contradiction. The polynomials are equal after all.
Yes
Proposition 8.3.31 Let \( \left\{ {{a}_{1},\cdots ,{a}_{m}}\right\} \) be algebraic numbers. Then

\[ \dim \mathbb{F}\left\lbrack {{a}_{1},\cdots ,{a}_{m}}\right\rbrack \leq \mathop{\prod }\limits_{{j = 1}}^{m}\deg \left( {a}_{j}\right) \]

and for an algebraic number \( a \),

\[ \dim \mathbb{F}\left\lbrack a\right\rbrack = \deg \left( a\right) \]

Every element of \( \mathbb{F}\left\lbrack {{a}_{1},\cdots ,{a}_{m}}\right\rbrack \) is in \( \mathbb{A} \) and \( \mathbb{F}\left\lbrack {{a}_{1},\cdots ,{a}_{m}}\right\rbrack \) is a field.
Proof: First consider the second assertion. Let the minimal polynomial of \( a \) be

\[ p\left( x\right) = {x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{1}x + {a}_{0}. \]

Since \( p\left( a\right) = 0 \), it follows \( \left\{ {1, a,{a}^{2},\cdots ,{a}^{n}}\right\} \) is linearly dependent. However, if the degree of \( q\left( x\right) \) is less than the degree of \( p\left( x\right) \), then if \( q\left( x\right) \) is not a constant, the two must be relatively prime because \( p\left( x\right) \) is irreducible, and so there exist polynomials \( k\left( x\right), l\left( x\right) \) such that

\[ 1 = l\left( x\right) q\left( x\right) + k\left( x\right) p\left( x\right) \]

and this is a contradiction if \( q\left( a\right) = 0 \) because it would imply upon replacing \( x \) with \( a \) that \( 1 = 0 \). Therefore, no nonzero polynomial of degree less than \( n \) can have \( a \) as a root. It follows

\[ \left\{ {1, a,{a}^{2},\cdots ,{a}^{n - 1}}\right\} \]

is linearly independent. Thus \( \dim \mathbb{F}\left\lbrack a\right\rbrack = \deg \left( a\right) = n \). Here is why this is. If \( q\left( a\right) \) is any element of \( \mathbb{F}\left\lbrack a\right\rbrack \),

\[ q\left( x\right) = p\left( x\right) k\left( x\right) + r\left( x\right) \]

where \( \deg r\left( x\right) < \deg p\left( x\right) \) and so \( q\left( a\right) = r\left( a\right) \) and \( r\left( a\right) \in \operatorname{span}\left( {1, a,{a}^{2},\cdots ,{a}^{n - 1}}\right) \).

Now consider the first claim. By definition, \( \mathbb{F}\left\lbrack {{a}_{1},\cdots ,{a}_{m}}\right\rbrack \) is obtained from all linear combinations of monomials \( {a}_{1}^{{k}_{1}}{a}_{2}^{{k}_{2}}\cdots {a}_{m}^{{k}_{m}} \) where the \( {k}_{i} \) are nonnegative integers.
From the first part, it suffices to consider only \( {k}_{j} \leq \deg \left( {a}_{j}\right) - 1 \). Therefore, there exists a spanning set for \( \mathbb{F}\left\lbrack {{a}_{1},\cdots ,{a}_{m}}\right\rbrack \) which has

\[ \mathop{\prod }\limits_{{i = 1}}^{m}\deg \left( {a}_{i}\right) \]

entries. By Theorem 8.2.4 this proves the first claim.
Yes
Theorem 8.3.32 The algebraic numbers \( \mathbb{A} \), those roots of polynomials in \( \mathbb{F}\left\lbrack x\right\rbrack \) which are in \( \mathbb{G} \), are a field.
Proof: Let \( a \) be an algebraic number and let \( p\left( x\right) \) be its minimal polynomial. Then \( p\left( x\right) \) is of the form

\[ {x}^{n} + {a}_{n - 1}{x}^{n - 1} + \cdots + {a}_{1}x + {a}_{0} \]

where \( {a}_{0} \neq 0 \). Then plugging in \( a \) and rearranging yields

\[ a\frac{\left( {{a}^{n - 1} + {a}_{n - 1}{a}^{n - 2} + \cdots + {a}_{1}}\right) \left( {-1}\right) }{{a}_{0}} = 1. \]

and so \( {a}^{-1} = \frac{\left( {{a}^{n - 1} + {a}_{n - 1}{a}^{n - 2} + \cdots + {a}_{1}}\right) \left( {-1}\right) }{{a}_{0}} \in \mathbb{F}\left\lbrack a\right\rbrack \). By the proposition, every element of \( \mathbb{F}\left\lbrack a\right\rbrack \) is in \( \mathbb{A} \) and this shows that for every element of \( \mathbb{A} \), its inverse is also in \( \mathbb{A} \). What about products and sums of things in \( \mathbb{A} \)? Are they still in \( \mathbb{A} \)? Yes. If \( a, b \in \mathbb{A} \), then both \( a + b \) and \( {ab} \in \mathbb{F}\left\lbrack {a, b}\right\rbrack \) and from the proposition, each element of \( \mathbb{F}\left\lbrack {a, b}\right\rbrack \) is in \( \mathbb{A} \). \( \blacksquare \)
No
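The inverse formula \( a^{-1} = -\left( a^{n-1} + a_{n-1}a^{n-2} + \cdots + a_1 \right)/a_0 \) from the proof can be illustrated numerically. A sketch using \( a = \sqrt{2} \), whose minimal polynomial over the rationals is \( x^2 - 2 \); this concrete choice is an illustrative assumption, not from the text:

```python
# For a = sqrt(2), the minimal polynomial is x^2 - 2, so n = 2,
# a_1 = 0 and a_0 = -2. The proof's formula gives
#   a^{-1} = -(a^{n-1} + ... + a_1)/a_0 = -(a + 0)/(-2) = a/2.
a = 2.0 ** 0.5
a1, a0 = 0.0, -2.0
inverse = -(a + a1) / a0
assert abs(a * inverse - 1.0) < 1e-12
print("inverse formula verified for sqrt(2)")
```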
Theorem 8.3.34 Suppose \( {a}_{1},\cdots ,{a}_{n} \) are nonzero algebraic numbers and suppose \( {\alpha }_{1},\cdots ,{\alpha }_{n} \) are distinct algebraic numbers. Then

\[ \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{e}^{{\alpha }_{i}} \neq 0 \]

In other words, \( \left\{ {{e}^{{\alpha }_{1}},\cdots ,{e}^{{\alpha }_{n}}}\right\} \) are independent as vectors with field of scalars equal to the algebraic numbers.
There is a proof of this in the appendix. It is long and hard but only depends on elementary considerations other than some algebra involving symmetric polynomials. See Theorem E.3.5.
No
Lemma 9.2.2 Let \( V \) and \( W \) be vector spaces and suppose \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) is a basis for \( V \). Then if \( L : V \rightarrow W \) is given by \( L{v}_{k} = {w}_{k} \in W \) and

\[ L\left( {\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{v}_{k}}\right) \equiv \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}L{v}_{k} = \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{w}_{k} \]

then \( L \) is well defined and is in \( \mathcal{L}\left( {V, W}\right) \). Also, if \( L, M \) are two linear transformations such that \( L{v}_{k} = M{v}_{k} \) for all \( k \), then \( M = L \).
Proof: \( L \) is well defined on \( V \) because, since \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) is a basis, there is exactly one way to write a given vector of \( V \) as a linear combination. Next, observe that \( L \) is obviously linear from the definition. If \( L, M \) are equal on the basis, then if \( \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{v}_{k} \) is an arbitrary vector of \( V \),

\[ L\left( {\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{v}_{k}}\right) = \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}L{v}_{k} = \mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}M{v}_{k} = M\left( {\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{k}{v}_{k}}\right) \]

and so \( L = M \) because they give the same result for every vector in \( V \). ∎
Yes
Theorem 9.2.3 Let \( V \) and \( W \) be finite dimensional linear spaces of dimension \( n \) and \( m \) respectively. Then \( \dim \left( {\mathcal{L}\left( {V, W}\right) }\right) = {mn} \).
Proof: Let two sets of bases be

\[ \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \text{ and }\left\{ {{w}_{1},\cdots ,{w}_{m}}\right\} \]

for \( V \) and \( W \) respectively. Using Lemma 9.2.2, let \( {w}_{i}{v}_{k} \in \mathcal{L}\left( {V, W}\right) \) be the linear transformation defined on the basis \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) by

\[ {w}_{i}{v}_{k}\left( {v}_{j}\right) \equiv {w}_{i}{\delta }_{jk} \]

where \( {\delta }_{jk} = 1 \) if \( j = k \) and 0 if \( j \neq k \). I will show that every \( L \in \mathcal{L}\left( {V, W}\right) \) is a linear combination of these special linear transformations called dyadics.

Then let \( L \in \mathcal{L}\left( {V, W}\right) \). Since \( \left\{ {{w}_{1},\cdots ,{w}_{m}}\right\} \) is a basis, there exist constants \( {d}_{jk} \) such that

\[ L{v}_{r} = \mathop{\sum }\limits_{{j = 1}}^{m}{d}_{jr}{w}_{j} \]

Now consider the following sum of dyadics.

\[ \mathop{\sum }\limits_{{j = 1}}^{m}\mathop{\sum }\limits_{{i = 1}}^{n}{d}_{ji}{w}_{j}{v}_{i} \]

Apply this to \( {v}_{r} \). This yields

\[ \mathop{\sum }\limits_{{j = 1}}^{m}\mathop{\sum }\limits_{{i = 1}}^{n}{d}_{ji}{w}_{j}{v}_{i}\left( {v}_{r}\right) = \mathop{\sum }\limits_{{j = 1}}^{m}\mathop{\sum }\limits_{{i = 1}}^{n}{d}_{ji}{w}_{j}{\delta }_{ir} = \mathop{\sum }\limits_{{j = 1}}^{m}{d}_{jr}{w}_{j} = L{v}_{r} \]

Therefore, \( L = \mathop{\sum }\limits_{{j = 1}}^{m}\mathop{\sum }\limits_{{i = 1}}^{n}{d}_{ji}{w}_{j}{v}_{i} \), showing the span of the dyadics is all of \( \mathcal{L}\left( {V, W}\right) \).

Now consider whether these dyadics form a linearly independent set.
Suppose

\[ \mathop{\sum }\limits_{{i, k}}{d}_{ik}{w}_{i}{v}_{k} = \mathbf{0} \]

Are all the scalars \( {d}_{ik} \) equal to 0? Apply both sides to \( {v}_{l} \).

\[ \mathbf{0} = \mathop{\sum }\limits_{{i, k}}{d}_{ik}{w}_{i}{v}_{k}\left( {v}_{l}\right) = \mathop{\sum }\limits_{{i = 1}}^{m}{d}_{il}{w}_{i} \]

and so, since \( \left\{ {{w}_{1},\cdots ,{w}_{m}}\right\} \) is a basis, \( {d}_{il} = 0 \) for each \( i = 1,\cdots, m \). Since \( l \) is arbitrary, this shows \( {d}_{il} = 0 \) for all \( i \) and \( l \). Thus these linear transformations form a basis and this shows that the dimension of \( \mathcal{L}\left( {V, W}\right) \) is \( {mn} \) as claimed because there are \( m \) choices for the \( {w}_{i} \) and \( n \) choices for the \( {v}_{j} \). \( \blacksquare \)
Yes
The matrix of a linear transformation with respect to ordered bases \( \beta ,\gamma \) as described above is characterized by the requirement that multiplication of the components of \( v \) by \( {\left\lbrack L\right\rbrack }_{\gamma \beta } \) gives the components of \( {Lv} \) .
This happens because by definition, if \( v = \mathop{\sum }\limits_{i}{x}_{i}{v}_{i} \), then

\[ {Lv} = \mathop{\sum }\limits_{i}{x}_{i}L{v}_{i} \equiv \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{\left\lbrack L\right\rbrack }_{ji}{x}_{i}{w}_{j} = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{i}{\left\lbrack L\right\rbrack }_{ji}{x}_{i}{w}_{j} \]

and so the \( {j}^{\text{th }} \) component of \( {Lv} \) is \( \mathop{\sum }\limits_{i}{\left\lbrack L\right\rbrack }_{ji}{x}_{i} \), the \( {j}^{\text{th }} \) component of \( \left\lbrack L\right\rbrack \mathbf{x} \) where \( \mathbf{x} \) is the component vector of \( v \). Could there be some other matrix which will do this? No, because if such a matrix is \( M \), then for any \( \mathbf{x} \), it follows from what was just shown that \( \left\lbrack L\right\rbrack \mathbf{x} = M\mathbf{x} \). Hence \( \left\lbrack L\right\rbrack = M \). \( \blacksquare \)
Yes
What is the matrix of this linear transformation with respect to this basis?
Using (9.2),

\[ \left( \begin{array}{llll} 0 & 1 & {2x} & 3{x}^{2} \end{array}\right) = \left( \begin{array}{lll} 1 & x & {x}^{2} \end{array}\right) {\left\lbrack D\right\rbrack }_{\gamma \beta }. \]

It follows from this that the first column of \( {\left\lbrack D\right\rbrack }_{\gamma \beta } \) is

\[ \left( \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right) \]

The next three columns of \( {\left\lbrack D\right\rbrack }_{\gamma \beta } \) are

\[ \left( \begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right) ,\left( \begin{array}{l} 0 \\ 2 \\ 0 \end{array}\right) ,\left( \begin{array}{l} 0 \\ 0 \\ 3 \end{array}\right) \]

and so

\[ {\left\lbrack D\right\rbrack }_{\gamma \beta } = \left( \begin{array}{llll} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{array}\right) \]
Yes
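The matrix \( {\left\lbrack D\right\rbrack }_{\gamma \beta } \) just computed can be checked numerically on coefficient vectors; a small NumPy sketch (the sample polynomial is an arbitrary choice):

```python
import numpy as np

# [D] acts on the coefficient vector of a + b x + c x^2 + d x^3
# (basis beta = {1, x, x^2, x^3}) and returns coefficients with
# respect to gamma = {1, x, x^2}.
D = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.]])

p = np.array([5., 4., 3., 2.])         # 5 + 4x + 3x^2 + 2x^3
dp = D @ p                             # coefficients of the derivative
assert np.allclose(dp, [4., 6., 6.])   # p' = 4 + 6x + 6x^2
print("[D] maps coefficients of p to coefficients of p'")
```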
Example 9.3.4 Let \( \beta \equiv \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{n}}\right\} \) and \( \gamma \equiv \left\{ {{\mathbf{w}}_{1},\cdots ,{\mathbf{w}}_{n}}\right\} \) be two bases for \( V \). Let \( L \) be the linear transformation which maps \( {\mathbf{v}}_{i} \) to \( {\mathbf{w}}_{i} \). Find \( {\left\lbrack L\right\rbrack }_{\gamma \beta } \). In case \( V = {\mathbb{F}}^{n} \) and letting \( \delta = \left\{ {{\mathbf{e}}_{1},\cdots ,{\mathbf{e}}_{n}}\right\} \), the usual basis for \( {\mathbb{F}}^{n} \), find \( {\left\lbrack L\right\rbrack }_{\delta } \).
Letting \( {\delta }_{ij} \) be the symbol which equals 1 if \( i = j \) and 0 if \( i \neq j \), it follows that \( L = \mathop{\sum }\limits_{{i, j}}{\delta }_{ij}{\mathbf{w}}_{i}{\mathbf{v}}_{j} \) and so \( {\left\lbrack L\right\rbrack }_{\gamma \beta } = I \), the identity matrix. For the second part, since \( L{\mathbf{v}}_{i} = {\mathbf{w}}_{i} \) and the coordinate map for \( \delta \) is the identity, you must have

\[ \left( \begin{array}{lll} {\mathbf{w}}_{1} & \cdots & {\mathbf{w}}_{n} \end{array}\right) = {\left\lbrack L\right\rbrack }_{\delta }\left( \begin{array}{lll} {\mathbf{v}}_{1} & \cdots & {\mathbf{v}}_{n} \end{array}\right) \]

and so

\[ {\left\lbrack L\right\rbrack }_{\delta } = \left( \begin{array}{lll} {\mathbf{w}}_{1} & \cdots & {\mathbf{w}}_{n} \end{array}\right) {\left( \begin{array}{lll} {\mathbf{v}}_{1} & \cdots & {\mathbf{v}}_{n} \end{array}\right) }^{-1} \]

where \( \left( \begin{array}{lll} {\mathbf{w}}_{1} & \cdots & {\mathbf{w}}_{n} \end{array}\right) \) is the \( n \times n \) matrix having \( {i}^{\text{th }} \) column equal to \( {\mathbf{w}}_{i} \).
Yes
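A numeric check of the fact underlying the example: the unique matrix \( M \) with \( M{\mathbf{v}}_{i} = {\mathbf{w}}_{i} \) satisfies \( M\left( {\mathbf{v}}_{1}\cdots {\mathbf{v}}_{n}\right) = \left( {\mathbf{w}}_{1}\cdots {\mathbf{w}}_{n}\right) \). The two bases below are arbitrary invertible choices, not from the text:

```python
import numpy as np

V = np.array([[1., 1.],
              [0., 1.]])   # columns v_1, v_2 (a basis of F^2)
W = np.array([[0., 1.],
              [1., 0.]])   # columns w_1, w_2 (another basis)

# The unique matrix with M v_i = w_i for each i:
M = W @ np.linalg.inv(V)
assert np.allclose(M @ V[:, 0], W[:, 0])
assert np.allclose(M @ V[:, 1], W[:, 1])
print("M carries each v_i to w_i")
```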
In the vector space of \( n \times n \) matrices, define

\[ A \sim B \]

if there exists an invertible matrix \( S \) such that

\[ A = {S}^{-1}{BS} \]

Then \( \sim \) is an equivalence relation and \( A \sim B \) if and only if whenever \( V \) is an \( n \) dimensional vector space, there exists \( L \in \mathcal{L}\left( {V, V}\right) \) and bases \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) and \( \left\{ {{w}_{1},\cdots ,{w}_{n}}\right\} \) such that \( A \) is the matrix of \( L \) with respect to \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) and \( B \) is the matrix of \( L \) with respect to \( \left\{ {{w}_{1},\cdots ,{w}_{n}}\right\} \).
Proof: \( A \sim A \) because \( S = I \) works in the definition. If \( A \sim B \), then \( B \sim A \), because

\[ A = {S}^{-1}{BS} \]

implies \( B = {SA}{S}^{-1} \). If \( A \sim B \) and \( B \sim C \), then

\[ A = {S}^{-1}{BS}, B = {T}^{-1}{CT} \]

and so

\[ A = {S}^{-1}{T}^{-1}{CTS} = {\left( TS\right) }^{-1}{CTS} \]

which implies \( A \sim C \). This verifies the first part of the conclusion.

Now let \( V \) be an \( n \) dimensional vector space, \( A \sim B \) so \( A = {S}^{-1}{BS} \) and pick a basis for \( V \),

\[ \beta \equiv \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \]

Define \( L \in \mathcal{L}\left( {V, V}\right) \) by

\[ L{v}_{i} \equiv \mathop{\sum }\limits_{j}{a}_{ji}{v}_{j} \]

where \( A = \left( {a}_{ij}\right) \). Thus \( A \) is the matrix of the linear transformation \( L \). Consider the diagram ![b5b55b40-8792-43a7-b53d-915a805ff8bc_230_0.jpg](images/b5b55b40-8792-43a7-b53d-915a805ff8bc_230_0.jpg)

where \( {q}_{\gamma } \) is chosen to make the diagram commute. Thus we need \( S = {q}_{\gamma }^{-1}{q}_{\beta } \) which requires

\[ {q}_{\gamma } = {q}_{\beta }{S}^{-1} \]

Then it follows that \( B \) is the matrix of \( L \) with respect to the basis

\[ \left\{ {{q}_{\gamma }{\mathbf{e}}_{1},\cdots ,{q}_{\gamma }{\mathbf{e}}_{n}}\right\} \equiv \left\{ {{w}_{1},\cdots ,{w}_{n}}\right\} \]

That is, \( A \) and \( B \) are matrices of the same linear transformation \( L \). Conversely, if \( A \sim B \), let \( L \) be as just described. Thus \( L = {q}_{\beta }A{q}_{\beta }^{-1} = {q}_{\beta }{SB}{S}^{-1}{q}_{\beta }^{-1} \). Let \( {q}_{\gamma } \equiv {q}_{\beta }S \) and it follows that \( B \) is the matrix of \( L \) with respect to \( \left\{ {{q}_{\beta }S{\mathbf{e}}_{1},\cdots ,{q}_{\beta }S{\mathbf{e}}_{n}}\right\} \). ∎
Yes
Proposition 9.3.10 Let \( A \) be an \( m \times n \) matrix and let \( L \) be the linear transformation which is defined by

\[ L\left( {\mathop{\sum }\limits_{{k = 1}}^{n}{x}_{k}{\mathbf{e}}_{k}}\right) \equiv \mathop{\sum }\limits_{{k = 1}}^{n}\left( {A{\mathbf{e}}_{k}}\right) {x}_{k} \equiv \mathop{\sum }\limits_{{i = 1}}^{m}\mathop{\sum }\limits_{{k = 1}}^{n}{A}_{ik}{x}_{k}{\mathbf{e}}_{i} \]

In simple language, to find \( L\mathbf{x} \), you multiply on the left of \( \mathbf{x} \) by \( A \). (\( A \) is the matrix of \( L \) with respect to the standard basis.) Then the matrix \( M \) of this linear transformation with respect to the bases \( \beta = \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{n}}\right\} \) for \( {\mathbb{F}}^{n} \) and \( \gamma = \left\{ {{\mathbf{w}}_{1},\cdots ,{\mathbf{w}}_{m}}\right\} \) for \( {\mathbb{F}}^{m} \) is given by

\[ M = {\left( \begin{array}{lll} {\mathbf{w}}_{1} & \cdots & {\mathbf{w}}_{m} \end{array}\right) }^{-1}A\left( \begin{array}{lll} {\mathbf{u}}_{1} & \cdots & {\mathbf{u}}_{n} \end{array}\right) \]

where \( \left( \begin{array}{lll} {\mathbf{w}}_{1} & \cdots & {\mathbf{w}}_{m} \end{array}\right) \) is the \( m \times m \) matrix which has \( {\mathbf{w}}_{j} \) as its \( {j}^{\text{th }} \) column.
Proof: Consider the following diagram. ![b5b55b40-8792-43a7-b53d-915a805ff8bc_231_0.jpg](images/b5b55b40-8792-43a7-b53d-915a805ff8bc_231_0.jpg)

Here the coordinate maps are defined in the usual way. Thus

\[ {q}_{\beta }{\left( \begin{array}{lll} {x}_{1} & \cdots & {x}_{n} \end{array}\right) }^{T} \equiv \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{u}}_{i}. \]

Therefore, \( {q}_{\beta } \) can be considered the same as multiplication of a vector in \( {\mathbb{F}}^{n} \) on the left by the matrix \( \left( \begin{array}{lll} {\mathbf{u}}_{1} & \cdots & {\mathbf{u}}_{n} \end{array}\right) \). Similar considerations apply to \( {q}_{\gamma } \). Thus it is desired to have the following for an arbitrary \( \mathbf{x} \in {\mathbb{F}}^{n} \).

\[ A\left( \begin{array}{lll} {\mathbf{u}}_{1} & \cdots & {\mathbf{u}}_{n} \end{array}\right) \mathbf{x} = \left( \begin{array}{lll} {\mathbf{w}}_{1} & \cdots & {\mathbf{w}}_{m} \end{array}\right) M\mathbf{x} \]

Therefore, the conclusion of the proposition follows. \( \blacksquare \)
Yes
Theorem 9.3.12 Let \( A \) be an \( n \times n \) matrix. Then \( A \) is diagonalizable if and only if \( {\mathbb{F}}^{n} \) has a basis of eigenvectors of \( A \). In this case, \( S \) of Definition 9.3.11 consists of the \( n \times n \) matrix whose columns are the eigenvectors of \( A \) and \( D = \operatorname{diag}\left( {{\lambda }_{1},\cdots ,{\lambda }_{n}}\right) \).
Proof: Suppose first that \( {\mathbb{F}}^{n} \) has a basis of eigenvectors \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{n}}\right\} \) where \( A{\mathbf{v}}_{i} = {\lambda }_{i}{\mathbf{v}}_{i} \). Then let \( S \) denote the matrix \( \left( \begin{array}{lll} {\mathbf{v}}_{1} & \cdots & {\mathbf{v}}_{n} \end{array}\right) \) and let \( {S}^{-1} \equiv \left( \begin{matrix} {\mathbf{u}}_{1}^{T} \\ \vdots \\ {\mathbf{u}}_{n}^{T} \end{matrix}\right) \) where

\[ {\mathbf{u}}_{i}^{T}{\mathbf{v}}_{j} = {\delta }_{ij} \equiv \left\{ \begin{array}{l} 1\text{ if }i = j \\ 0\text{ if }i \neq j \end{array}\right. \]

\( {S}^{-1} \) exists because \( S \) has rank \( n \). Then from block multiplication,

\[ {S}^{-1}{AS} = \left( \begin{matrix} {\mathbf{u}}_{1}^{T} \\ \vdots \\ {\mathbf{u}}_{n}^{T} \end{matrix}\right) \left( {A{\mathbf{v}}_{1}\cdots A{\mathbf{v}}_{n}}\right) = \left( \begin{matrix} {\mathbf{u}}_{1}^{T} \\ \vdots \\ {\mathbf{u}}_{n}^{T} \end{matrix}\right) \left( {{\lambda }_{1}{\mathbf{v}}_{1}\cdots {\lambda }_{n}{\mathbf{v}}_{n}}\right) \]

\[ = \left( \begin{matrix} {\lambda }_{1} & 0 & \cdots & 0 \\ 0 & {\lambda }_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & {\lambda }_{n} \end{matrix}\right) = D. \]

Next suppose \( A \) is diagonalizable so \( {S}^{-1}{AS} = D \equiv \operatorname{diag}\left( {{\lambda }_{1},\cdots ,{\lambda }_{n}}\right) \). Then the columns of \( S \) form a basis because \( {S}^{-1} \) is given to exist. It only remains to verify that these columns of \( S \) are eigenvectors. But letting \( S = \left( \begin{array}{lll} {\mathbf{v}}_{1} & \cdots & {\mathbf{v}}_{n} \end{array}\right) \), \( {AS} = {SD} \) and so \( \left( \begin{array}{lll} A{\mathbf{v}}_{1} & \cdots & A{\mathbf{v}}_{n} \end{array}\right) = \left( \begin{array}{lll} {\lambda }_{1}{\mathbf{v}}_{1} & \cdots & {\lambda }_{n}{\mathbf{v}}_{n} \end{array}\right) \) which shows that \( A{\mathbf{v}}_{i} = {\lambda }_{i}{\mathbf{v}}_{i} \).
\( \blacksquare \)
Yes
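Theorem 9.3.12 can be illustrated numerically with a small matrix having distinct eigenvalues; the matrix below is an arbitrary example, not from the text:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])               # eigenvalues 5 and 2

eigvals, S = np.linalg.eig(A)          # columns of S are eigenvectors
D = np.linalg.inv(S) @ A @ S           # S^{-1} A S
assert np.allclose(D, np.diag(eigvals))
print("S^{-1} A S is diagonal with the eigenvalues on the diagonal")
```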
Corollary 9.3.13 Let \( L \in \mathcal{L}\left( {V, V}\right) \) where \( V \) is an \( n \) dimensional vector space and let \( A \) be the matrix of this linear transformation with respect to a basis on \( V \). Then it is possible to define

\[ \det \left( L\right) \equiv \det \left( A\right) \]
Proof: Each choice of basis for \( V \) determines a matrix for \( L \) with respect to the basis. If \( A \) and \( B \) are two such matrices, it follows from Theorem 9.3.9 that

\[ A = {S}^{-1}{BS} \]

and so

\[ \det \left( A\right) = \det \left( {S}^{-1}\right) \det \left( B\right) \det \left( S\right) . \]

But

\[ 1 = \det \left( I\right) = \det \left( {{S}^{-1}S}\right) = \det \left( S\right) \det \left( {S}^{-1}\right) \]

and so

\[ \det \left( A\right) = \det \left( B\right) \;\blacksquare \]
Yes
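The corollary rests on similar matrices having equal determinants, which is easy to check numerically; the random matrices below are arbitrary samples:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))        # almost surely invertible

A = np.linalg.inv(S) @ B @ S           # A is similar to B
assert np.isclose(np.linalg.det(A), np.linalg.det(B))
print("similar matrices have equal determinants")
```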
Theorem 9.3.15 Let \( A \in \mathcal{L}\left( {X, Y}\right) \). Then \( \operatorname{rank}\left( A\right) = \operatorname{rank}\left( M\right) \) where \( M \) is the matrix of \( A \) taken with respect to a pair of bases for the vector spaces \( X \) and \( Y \).
Proof: Recall the diagram which describes what is meant by the matrix of \( A \). Here the two bases are as indicated.

\[ \begin{array}{l} \beta = \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \;X\;\overset{A}{\rightarrow}\;Y\;\left\{ {{w}_{1},\cdots ,{w}_{m}}\right\} = \gamma \\ {q}_{\beta } \uparrow \; \circ \; \uparrow {q}_{\gamma } \\ {\mathbb{F}}^{n}\;\overset{M}{\rightarrow}\;{\mathbb{F}}^{m} \end{array} \]

Let \( \left\{ {A{x}_{1},\cdots, A{x}_{r}}\right\} \) be a basis for \( {AX} \). Since \( A = {q}_{\gamma }M{q}_{\beta }^{-1} \),

\[ \left\{ {{q}_{\gamma }M{q}_{\beta }^{-1}{x}_{1},\cdots ,{q}_{\gamma }M{q}_{\beta }^{-1}{x}_{r}}\right\} \]

is a basis for \( {AX} \). It follows that

\[ \left\{ {M{q}_{\beta }^{-1}{x}_{1},\cdots, M{q}_{\beta }^{-1}{x}_{r}}\right\} \]

is linearly independent and so \( \operatorname{rank}\left( A\right) \leq \operatorname{rank}\left( M\right) \). However, one could interchange the roles of \( M \) and \( A \) in the above argument and thereby turn the inequality around. \( \blacksquare \)
Yes
Theorem 9.3.16 Let \( L \in \mathcal{L}\left( {V, V}\right) \) where \( V \) is a finite dimensional vector space. Then the following are equivalent.

1. \( L \) is one to one.

2. \( L \) maps a basis to a basis.

3. \( L \) is onto.

4. \( \det \left( L\right) \neq 0 \).

5. If \( {Lv} = 0 \) then \( v = 0 \).
Proof: Suppose first \( L \) is one to one and let \( \beta = {\left\{ {v}_{i}\right\} }_{i = 1}^{n} \) be a basis. Then if \( \mathop{\sum }\limits_{{i = 1}}^{n}{c}_{i}L{v}_{i} = 0 \) it follows \( L\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{c}_{i}{v}_{i}}\right) = 0 \) which means that since \( L\left( 0\right) = 0 \), and \( L \) is one to one, it must be the case that \( \mathop{\sum }\limits_{{i = 1}}^{n}{c}_{i}{v}_{i} = 0 \). Since \( \left\{ {v}_{i}\right\} \) is a basis, each \( {c}_{i} = 0 \) which shows \( \left\{ {L{v}_{i}}\right\} \) is a linearly independent set. Since there are \( n \) of these, it must be that this is a basis.

Now suppose 2.). Then letting \( \left\{ {v}_{i}\right\} \) be a basis, and \( y \in V \), it follows from part 2.) that there are constants \( \left\{ {c}_{i}\right\} \) such that \( y = \mathop{\sum }\limits_{{i = 1}}^{n}{c}_{i}L{v}_{i} = L\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{c}_{i}{v}_{i}}\right) \). Thus \( L \) is onto. It has been shown that 2.) implies 3.).

Now suppose 3.). Then the operation consisting of multiplication by the matrix of \( L \), \( \left\lbrack L\right\rbrack \), must be onto. However, the vectors in \( {\mathbb{F}}^{n} \) so obtained consist of linear combinations of the columns of \( \left\lbrack L\right\rbrack \). Therefore, the column rank of \( \left\lbrack L\right\rbrack \) is \( n \). By Theorem 3.3.23 this equals the determinant rank and so \( \det \left( \left\lbrack L\right\rbrack \right) \equiv \det \left( L\right) \neq 0 \).

Now assume 4.). If \( {Lv} = 0 \) for some \( v \neq 0 \), it follows that \( \left\lbrack L\right\rbrack \mathbf{x} = 0 \) for some \( \mathbf{x} \neq \mathbf{0} \). Therefore, the columns of \( \left\lbrack L\right\rbrack \) are linearly dependent and so by Theorem 3.3.23, \( \det \left( \left\lbrack L\right\rbrack \right) = \det \left( L\right) = 0 \) contrary to 4.). Therefore, 4.) implies 5.).

Now suppose 5.) and suppose \( {Lv} = {Lw} \).
Then \( L\left( {v - w}\right) = 0 \) and so by 5.), \( v - w = 0 \) showing that \( L \) is one to one. \( \blacksquare \)
Yes
Determine the matrix for the transformation mapping \( {\mathbb{R}}^{2} \) to \( {\mathbb{R}}^{2} \) which consists of rotating every vector counterclockwise through an angle of \( \theta \).
Let \( {\mathbf{e}}_{1} \equiv \left( \begin{array}{l} 1 \\ 0 \end{array}\right) \) and \( {\mathbf{e}}_{2} \equiv \left( \begin{array}{l} 0 \\ 1 \end{array}\right) \). These identify the geometric vectors which point along the positive \( x \) axis and positive \( y \) axis as shown.

From Theorem 9.3.18, you only need to find \( T{\mathbf{e}}_{1} \) and \( T{\mathbf{e}}_{2} \), the first being the first column of the desired matrix \( A \) and the second being the second column. From drawing a picture and doing a little geometry, you see that

\[ T{\mathbf{e}}_{1} = \left( \begin{matrix} \cos \theta \\ \sin \theta \end{matrix}\right), T{\mathbf{e}}_{2} = \left( \begin{matrix} - \sin \theta \\ \cos \theta \end{matrix}\right) . \]

Therefore, from Theorem 9.3.18,

\[ A = \left( \begin{matrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{matrix}\right) \]
Yes
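The rotation matrix can be sanity-checked in NumPy; \( \theta = 0.7 \) is an arbitrary sample angle:

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# A sends e_1, e_2 where the geometry says:
assert np.allclose(A @ [1., 0.], [np.cos(theta), np.sin(theta)])
assert np.allclose(A @ [0., 1.], [-np.sin(theta), np.cos(theta)])
# Rotations preserve length: A is orthogonal with determinant 1.
assert np.allclose(A.T @ A, np.eye(2))
print("rotation matrix verified on the standard basis")
```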
Find the matrix of the linear transformation which is obtained by first rotating all vectors through an angle of \( \phi \) and then through an angle \( \theta \). Thus you want the linear transformation which rotates all vectors through an angle of \( \theta + \phi \).
Let \( {T}_{\theta + \phi } \) denote the linear transformation which rotates every vector through an angle of \( \theta + \phi \). Then to get \( {T}_{\theta + \phi } \), you could first do \( {T}_{\phi } \) and then do \( {T}_{\theta } \) where \( {T}_{\phi } \) is the linear transformation which rotates through an angle of \( \phi \) and \( {T}_{\theta } \) is the linear transformation which rotates through an angle of \( \theta \). Denoting the corresponding matrices by \( {A}_{\theta + \phi },{A}_{\phi } \), and \( {A}_{\theta } \), you must have for every \( \mathbf{x} \)

\[ {A}_{\theta + \phi }\mathbf{x} = {T}_{\theta + \phi }\mathbf{x} = {T}_{\theta }{T}_{\phi }\mathbf{x} = {A}_{\theta }{A}_{\phi }\mathbf{x}. \]

Consequently, you must have

\[ {A}_{\theta + \phi } = \left( \begin{matrix} \cos \left( {\theta + \phi }\right) & - \sin \left( {\theta + \phi }\right) \\ \sin \left( {\theta + \phi }\right) & \cos \left( {\theta + \phi }\right) \end{matrix}\right) = {A}_{\theta }{A}_{\phi } \]

\[ = \left( \begin{matrix} \cos \theta & - \sin \theta \\ \sin \theta & \cos \theta \end{matrix}\right) \left( \begin{matrix} \cos \phi & - \sin \phi \\ \sin \phi & \cos \phi \end{matrix}\right) . \]

Therefore,

\[ \left( \begin{matrix} \cos \left( {\theta + \phi }\right) & - \sin \left( {\theta + \phi }\right) \\ \sin \left( {\theta + \phi }\right) & \cos \left( {\theta + \phi }\right) \end{matrix}\right) = \left( \begin{matrix} \cos \theta \cos \phi - \sin \theta \sin \phi & - \cos \theta \sin \phi - \sin \theta \cos \phi \\ \sin \theta \cos \phi + \cos \theta \sin \phi & \cos \theta \cos \phi - \sin \theta \sin \phi \end{matrix}\right) . \]
Yes
Find the matrix of the linear transformation which rotates vectors in \( {\mathbb{R}}^{3} \) counterclockwise about the positive \( z \) axis.
Let \( T \) be the name of this linear transformation. In this case, \( T{\mathbf{e}}_{3} = {\mathbf{e}}_{3} \), \( T{\mathbf{e}}_{1} = {\left( \cos \theta ,\sin \theta ,0\right) }^{T} \), and \( T{\mathbf{e}}_{2} = {\left( -\sin \theta ,\cos \theta ,0\right) }^{T} \). Therefore, the matrix of this transformation is just

\[ \left( \begin{matrix} \cos \theta & - \sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{matrix}\right) \]
Yes
Let the projection map be defined above and let \( \mathbf{u} = {\left( 1,2,3\right) }^{T} \) . Find the matrix of this linear transformation with respect to the usual basis.
You can find this matrix in the same way as in earlier examples. \( {\operatorname{proj}}_{\mathbf{u}}\left( {\mathbf{e}}_{i}\right) \) gives the \( {i}^{\text{th }} \) column of the desired matrix. Therefore, it is only necessary to find

\[ {\operatorname{proj}}_{\mathbf{u}}\left( {\mathbf{e}}_{i}\right) \equiv \left( \frac{{\mathbf{e}}_{i} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\right) \mathbf{u} \]

For the given vector in the example, this implies the columns of the desired matrix are

\[ \frac{1}{14}\left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) ,\frac{2}{14}\left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) ,\frac{3}{14}\left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) \]

Hence the matrix is

\[ \frac{1}{14}\left( \begin{array}{lll} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{array}\right) \]
Yes
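The projection matrix just computed can be verified in NumPy; the test vector is an arbitrary choice:

```python
import numpy as np

u = np.array([1., 2., 3.])
P = np.outer(u, u) / (u @ u)     # equals (1/14) * [[1,2,3],[2,4,6],[3,6,9]]

x = np.array([4., -1., 2.])
assert np.allclose(P @ x, ((x @ u) / (u @ u)) * u)   # P x = proj_u(x)
assert np.allclose(P @ P, P)                          # projections are idempotent
print("projection matrix agrees with proj_u and is idempotent")
```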
Find the matrix of the linear transformation which reflects all vectors in \( {\mathbb{R}}^{3} \) through the \( {xz} \) plane.
As illustrated above, you just need to find \( T{\mathbf{e}}_{i} \) where \( T \) is the name of the transformation. But \( T{\mathbf{e}}_{1} = {\mathbf{e}}_{1}, T{\mathbf{e}}_{3} = {\mathbf{e}}_{3} \), and \( T{\mathbf{e}}_{2} = - {\mathbf{e}}_{2} \) so the matrix is

\[ \left( \begin{matrix} 1 & 0 & 0 \\ 0 & - 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \]
Yes
Find the matrix of the linear transformation which first rotates counterclockwise about the positive \( z \) axis and then reflects through the \( {xz} \) plane.
This linear transformation is just the composition of two linear transformations having matrices \[ \left( \begin{matrix} \cos \theta & - \sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{matrix}\right) ,\left( \begin{matrix} 1 & 0 & 0 \\ 0 & - 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \] respectively. Thus the matrix desired is \[ \left( \begin{matrix} 1 & 0 & 0 \\ 0 & - 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right) \left( \begin{matrix} \cos \theta & - \sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{matrix}\right) = \left( \begin{matrix} \cos \theta & - \sin \theta & 0 \\ - \sin \theta & - \cos \theta & 0 \\ 0 & 0 & 1 \end{matrix}\right) . \]
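The matrix product above can be confirmed numerically (NumPy sketch; the angle θ = 0.3 is an arbitrary choice). Note the order: rotating first and reflecting second means the reflection matrix multiplies on the left.

```python
import numpy as np

theta = 0.3
rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
reflect = np.diag([1.0, -1.0, 1.0])  # reflection through the xz plane

# "Rotate first, then reflect" is the product reflect @ rotate
composed = reflect @ rotate
expected = np.array([[ np.cos(theta), -np.sin(theta), 0],
                     [-np.sin(theta), -np.cos(theta), 0],
                     [0, 0, 1]])
assert np.allclose(composed, expected)
```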
Yes
Lemma 9.4.2 When \( \lambda \) is an eigenvalue of \( A \) which is also in \( \mathbb{F} \), the field of scalars, then there exists \( v \neq 0 \) such that \( {Av} = {\lambda v} \) .
Proof: This follows from Theorem 9.3.16. Since \( \lambda \in \mathbb{F} \), \[ {\lambda I} - A \in \mathcal{L}\left( {V, V}\right) \] and since it has zero determinant, it is not one to one. Hence there exists \( v \neq 0 \) such that \( \left( {{\lambda I} - A}\right) v = 0 \), that is, \( {Av} = {\lambda v} \). ∎
Yes
Lemma 9.4.3 Let \( A \in \mathcal{L}\left( {V, V}\right) \) where \( V \) is a finite dimensional vector space of dimension \( n \) with arbitrary field of scalars. Then there exists a unique polynomial of the form\n\n\[ p\left( \lambda \right) = {\lambda }^{m} + {c}_{m - 1}{\lambda }^{m - 1} + \cdots + {c}_{1}\lambda + {c}_{0} \]\n\nsuch that \( p\left( A\right) = 0 \) and \( m \) is as small as possible for this to occur.
Proof: Consider the linear transformations \( I, A,{A}^{2},\cdots ,{A}^{{n}^{2}} \). There are \( {n}^{2} + 1 \) of these transformations and so by Theorem 9.2.3 the set is linearly dependent. Thus there exist constants \( {c}_{i} \in \mathbb{F} \), not all zero, such that\n\n\[ {c}_{0}I + \mathop{\sum }\limits_{{k = 1}}^{{n}^{2}}{c}_{k}{A}^{k} = 0. \]\n\nThis implies there exists a polynomial \( q\left( \lambda \right) \) which has the property that \( q\left( A\right) = 0 \). In fact, one example is \( q\left( \lambda \right) \equiv {c}_{0} + \mathop{\sum }\limits_{{k = 1}}^{{n}^{2}}{c}_{k}{\lambda }^{k} \). Dividing by the leading coefficient, it can be assumed this polynomial is of the form \( {\lambda }^{m} + {c}_{m - 1}{\lambda }^{m - 1} + \cdots + {c}_{1}\lambda + {c}_{0} \), a monic polynomial. Now consider all such monic polynomials \( q \) such that \( q\left( A\right) = 0 \) and pick one which has the smallest degree \( m \). This is called the minimal polynomial and will be denoted here by \( p\left( \lambda \right) \). If there were two minimal polynomials, the one just found and another,\n\n\[ {\lambda }^{m} + {d}_{m - 1}{\lambda }^{m - 1} + \cdots + {d}_{1}\lambda + {d}_{0}, \]\n\nthen subtracting these would give the polynomial\n\n\[ {q}^{\prime }\left( \lambda \right) = \left( {{d}_{m - 1} - {c}_{m - 1}}\right) {\lambda }^{m - 1} + \cdots + \left( {{d}_{1} - {c}_{1}}\right) \lambda + {d}_{0} - {c}_{0} \]\n\nSince \( {q}^{\prime }\left( A\right) = 0 \), this requires each \( {d}_{k} = {c}_{k} \): otherwise you could divide by \( {d}_{k} - {c}_{k} \), where \( k \) is the largest index for which this difference is nonzero, and obtain a monic polynomial of degree less than \( m \) which sends \( A \) to 0. This would contradict the choice of \( m \). ∎
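The proof is effectively an algorithm: find the first power of \( A \) that is a linear combination of the lower powers. A NumPy sketch (the helper `minimal_polynomial` and the test matrix are illustrative choices, not from the text):

```python
import numpy as np

def minimal_polynomial(A, tol=1e-9):
    """Coefficients [c_0, ..., c_{m-1}, 1] of the monic minimal polynomial,
    found by looking for the first power A^m that is a linear combination
    of I, A, ..., A^{m-1}, as in the proof above."""
    n = A.shape[0]
    powers = [np.eye(n).ravel()]
    for m in range(1, n * n + 2):
        target = np.linalg.matrix_power(A, m).ravel()
        basis = np.column_stack(powers)
        coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
        if np.linalg.norm(basis @ coef - target) < tol:
            return np.append(-coef, 1.0)  # monic: A^m - sum c_k A^k = 0
        powers.append(target)

A = np.array([[2.0, 1.0], [0.0, 2.0]])   # a single 2x2 Jordan block
p = minimal_polynomial(A)
# minimal polynomial is (λ - 2)^2 = λ^2 - 4λ + 4
assert np.allclose(p, [4.0, -4.0, 1.0])

# verify p(A) = 0
pA = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(p))
assert np.allclose(pA, np.zeros((2, 2)))
```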
Yes
Theorem 9.4.4 Let \( V \) be a nonzero finite dimensional vector space of dimension \( n \) with the field of scalars equal to \( \mathbb{F} \). Suppose \( A \in \mathcal{L}\left( {V, V}\right) \) and for \( p\left( \lambda \right) \) the minimal polynomial defined above, let \( \mu \in \mathbb{F} \) be a zero of this polynomial. Then there exists \( v \neq 0, v \in V \) such that\n\n\[ \n{Av} = {\mu v}.\n\]\n\nIf \( \mathbb{F} = \mathbb{C} \), then \( A \) always has an eigenvector and eigenvalue. Furthermore, if \( \left\{ {{\lambda }_{1},\cdots ,{\lambda }_{m}}\right\} \) are the zeros of \( p\left( \lambda \right) \) in \( \mathbb{F} \), these are exactly the eigenvalues of \( A \) for which there exists an eigenvector in \( V \) .
Proof: Suppose first \( \mu \) is a zero of \( p\left( \lambda \right) \). Since \( p\left( \mu \right) = 0 \), it follows\n\n\[ \np\left( \lambda \right) = \left( {\lambda - \mu }\right) k\left( \lambda \right)\n\]\n\nwhere \( k\left( \lambda \right) \) is a polynomial having coefficients in \( \mathbb{F} \). Since \( p \) has minimal degree, \( k\left( A\right) \neq 0 \) and so there exists a vector \( u \neq 0 \) such that \( k\left( A\right) u \equiv v \neq 0 \). But then\n\n\[ \n\left( {A - {\mu I}}\right) v = \left( {A - {\mu I}}\right) k\left( A\right) \left( u\right) = p\left( A\right) u = \mathbf{0}.\n\]\n\nThe next claim about the existence of an eigenvalue follows from the fundamental theorem of algebra and what was just shown.\n\nIt has been shown that every zero of \( p\left( \lambda \right) \) is an eigenvalue which has an eigenvector in \( V \). Now suppose \( \mu \) is an eigenvalue which has an eigenvector in \( V \) so that \( {Av} = {\mu v} \) for some \( v \in V, v \neq 0 \). Does it follow \( \mu \) is a zero of \( p\left( \lambda \right) \) ?\n\n\[ \n\mathbf{0} = p\left( A\right) v = p\left( \mu \right) v\n\]\n\nand since \( v \neq 0 \), \( \mu \) is indeed a zero of \( p\left( \lambda \right) \). ∎
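The conclusion can be checked numerically on a concrete matrix (NumPy sketch; the matrix and its minimal polynomial are an illustrative choice): the zeros of the minimal polynomial should be exactly the eigenvalues.

```python
import numpy as np

# For this A the minimal polynomial is (λ-2)(λ-3) = λ^2 - 5λ + 6;
# its zeros should coincide with the eigenvalues of A.
A = np.array([[2.0, 1.0], [0.0, 3.0]])

p = [1.0, -5.0, 6.0]          # λ^2 - 5λ + 6, highest degree first
pA = np.linalg.matrix_power(A, 2) - 5 * A + 6 * np.eye(2)
assert np.allclose(pA, 0)     # p(A) = 0

zeros = np.sort(np.roots(p))
eigenvalues = np.sort(np.linalg.eigvals(A))
assert np.allclose(zeros, eigenvalues)
```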
Yes
Lemma 10.1.2 Whenever \( L \in \mathcal{L}\left( {V, W}\right) ,\ker \left( L\right) \) is a subspace.
Proof: If \( a, b \) are scalars and \( v, w \) are in \( \ker \left( L\right) \), then\n\n\[ L\left( {{av} + {bw}}\right) = {aL}\left( v\right) + {bL}\left( w\right) = 0 + 0 = 0 \]\n\nso \( {av} + {bw} \in \ker \left( L\right) \). Also \( L\left( 0\right) = 0 \), so \( 0 \in \ker \left( L\right) \). Thus \( \ker \left( L\right) \) is a subspace. ∎
No
Theorem 10.1.3 Let \( A \in \mathcal{L}\left( {V, W}\right) \) and \( B \in \mathcal{L}\left( {W, U}\right) \) where \( V, W, U \) are all vector spaces over a field \( \mathbb{F} \) . Suppose also that \( \ker \left( A\right) \) and \( A\left( {\ker \left( {BA}\right) }\right) \) are finite dimensional subspaces. Then\n\n\[ \dim \left( {\ker \left( {BA}\right) }\right) \leq \dim \left( {\ker \left( B\right) }\right) + \dim \left( {\ker \left( A\right) }\right) . \]\n
Proof: If \( \mathbf{x} \in \ker \left( {BA}\right) \), then \( A\mathbf{x} \in \ker \left( B\right) \) and so \( A\left( {\ker \left( {BA}\right) }\right) \subseteq \ker \left( B\right) \) .\n\nNow let \( \left\{ {{x}_{1},\cdots ,{x}_{n}}\right\} \) be a basis of \( \ker \left( A\right) \) and let \( \left\{ {A{y}_{1},\cdots, A{y}_{m}}\right\} \) be a basis for \( A\left( {\ker \left( {BA}\right) }\right) \) . Take any \( z \in \ker \left( {BA}\right) \) . Then \( {Az} = \mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}A{y}_{i} \) and so\n\n\[ A\left( {z - \mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}{y}_{i}}\right) = \mathbf{0} \]\n\nwhich means \( z - \mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}{y}_{i} \in \ker \left( A\right) \) and so there are scalars \( {b}_{j} \) such that\n\n\[ z - \mathop{\sum }\limits_{{i = 1}}^{m}{a}_{i}{y}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{b}_{j}{x}_{j} \]\n\nIt follows \( \operatorname{span}\left( {{x}_{1},\cdots ,{x}_{n},{y}_{1},\cdots ,{y}_{m}}\right) \supseteq \ker \left( {BA}\right) \) . Since \( A\left( {\ker \left( {BA}\right) }\right) \subseteq \ker \left( B\right) \), it follows \( m \leq \dim \left( {\ker \left( B\right) }\right) \), and so\n\n\[ \dim \left( {\ker \left( {BA}\right) }\right) \leq n + m \leq \dim \left( {\ker \left( A\right) }\right) + \dim \left( {\ker \left( B\right) }\right) \;\blacksquare \]
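The inequality can be tested numerically via rank-nullity (NumPy sketch; the random rank-deficient matrices are illustrative choices). Nullity is computed as the number of columns minus the rank.

```python
import numpy as np

rng = np.random.default_rng(0)

def nullity(M):
    # dim ker(M) = number of columns minus rank, by rank-nullity
    return M.shape[1] - np.linalg.matrix_rank(M)

# Random rank-deficient A : R^5 -> R^4 and B : R^4 -> R^3
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))  # rank <= 2
B = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 4))  # rank <= 2

# dim ker(BA) <= dim ker(B) + dim ker(A)
assert nullity(B @ A) <= nullity(B) + nullity(A)
```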
Yes
Lemma 10.1.5 If \( V = {V}_{1} \oplus \cdots \oplus {V}_{r} \) and if \( {\beta }_{i} = \left\{ {{v}_{1}^{i},\cdots ,{v}_{{m}_{i}}^{i}}\right\} \) is a basis for \( {V}_{i} \), then a basis for \( V \) is \( \left\{ {{\beta }_{1},\cdots ,{\beta }_{r}}\right\} \) .
Proof: Suppose \( \mathop{\sum }\limits_{{i = 1}}^{r}\mathop{\sum }\limits_{{j = 1}}^{{m}_{i}}{c}_{ij}{v}_{j}^{i} = 0 \) . Then since it is a direct sum, it follows for each \( i \) ,\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{{m}_{i}}{c}_{ij}{v}_{j}^{i} = 0 \]\n\nand now since \( \left\{ {{v}_{1}^{i},\cdots ,{v}_{{m}_{i}}^{i}}\right\} \) is a basis, each \( {c}_{ij} = 0 \) . Thus \( \left\{ {{\beta }_{1},\cdots ,{\beta }_{r}}\right\} \) is linearly independent, and since \( V = {V}_{1} + \cdots + {V}_{r} \), these vectors also span \( V \) . Hence they form a basis. ∎
Yes
Lemma 10.1.6 Let \( {L}_{i} \) be in \( \mathcal{L}\left( {V, V}\right) \) and suppose for \( i \neq j,{L}_{i}{L}_{j} = {L}_{j}{L}_{i} \) and also \( {L}_{i} \) is one to one on \( \ker \left( {L}_{j}\right) \) whenever \( i \neq j \) . Then \[ \ker \left( {\mathop{\prod }\limits_{{i = 1}}^{p}{L}_{i}}\right) = \ker \left( {L}_{1}\right) \oplus \cdots \oplus \ker \left( {L}_{p}\right) \] Here \( \mathop{\prod }\limits_{{i = 1}}^{p}{L}_{i} \) is the product of all the linear transformations. A symbol like \( \mathop{\prod }\limits_{{j \neq i}}{L}_{j} \) is the product of all of them but \( {L}_{i} \) .
Proof: Note that since the operators commute, \( {L}_{j} : \ker \left( {L}_{i}\right) \rightarrow \ker \left( {L}_{i}\right) \) . Here is why. If \( {L}_{i}y = 0 \) so that \( y \in \ker \left( {L}_{i}\right) \), then \[ {L}_{i}{L}_{j}y = {L}_{j}{L}_{i}y = {L}_{j}0 = 0 \] and so \( {L}_{j} : \ker \left( {L}_{i}\right) \rightarrow \ker \left( {L}_{i}\right) \) . Suppose \[ \mathop{\sum }\limits_{{i = 1}}^{p}{v}_{i} = 0,\;{v}_{i} \in \ker \left( {L}_{i}\right) \] but some \( {v}_{i} \neq 0 \) . Then apply \( \mathop{\prod }\limits_{{j \neq i}}{L}_{j} \) to both sides. Since the linear transformations commute and \( {L}_{j}{v}_{j} = 0 \), every term with index other than \( i \) vanishes and this results in \[ \mathop{\prod }\limits_{{j \neq i}}{L}_{j}\left( {v}_{i}\right) = 0 \] But each \( {L}_{j} \) is one to one on \( \ker \left( {L}_{i}\right) \) and maps \( \ker \left( {L}_{i}\right) \) to \( \ker \left( {L}_{i}\right) \), so this forces \( {v}_{i} = 0 \), a contradiction. Thus if \[ \mathop{\sum }\limits_{i}{v}_{i} = 0,\;{v}_{i} \in \ker \left( {L}_{i}\right) \] then each \( {v}_{i} = 0 \) . Suppose \( {\beta }_{i} = \left\{ {{v}_{1}^{i},\cdots ,{v}_{{m}_{i}}^{i}}\right\} \) is a basis for \( \ker \left( {L}_{i}\right) \) . Then from what was just shown and Lemma 10.1.5, \( \left\{ {{\beta }_{1},\cdots ,{\beta }_{p}}\right\} \) must be linearly independent and a basis for \[ \ker \left( {L}_{1}\right) \oplus \cdots \oplus \ker \left( {L}_{p}\right) . \] It is also clear that since these operators commute, \[ \ker \left( {L}_{1}\right) \oplus \cdots \oplus \ker \left( {L}_{p}\right) \subseteq \ker \left( {\mathop{\prod }\limits_{{i = 1}}^{p}{L}_{i}}\right) \] Therefore, by Sylvester's theorem (Theorem 10.1.3) and the above, \[ \dim \left( {\ker \left( {\mathop{\prod }\limits_{{i = 1}}^{p}{L}_{i}}\right) }\right) \leq \mathop{\sum }\limits_{{j = 1}}^{p}\dim \left( {\ker \left( {L}_{j}\right) }\right) \] \[ = \dim \left( {\ker \left( {L}_{1}\right) \oplus \cdots \oplus \ker \left( {L}_{p}\right) }\right) \leq \dim \left( {\ker \left( {\mathop{\prod }\limits_{{i = 1}}^{p}{L}_{i}}\right) }\right) . \] Now in general, if \( W \) is a subspace of \( V \), a finite dimensional vector space, and the two have the same dimension, then \( W = V \) . This is because \( W \) has a basis and if \( v \) were not in the span of this basis, then \( v \) adjoined to the basis of \( W \) would be a linearly independent set having more vectors in it than a basis of \( V \), a contradiction. It follows that \[ \ker \left( {L}_{1}\right) \oplus \cdots \oplus \ker \left( {L}_{p}\right) = \ker \left( {\mathop{\prod }\limits_{{i = 1}}^{p}{L}_{i}}\right) \;\blacksquare \]
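A concrete instance of the lemma can be checked with NumPy (the diagonal matrix \( A \) and the eigenvalues are arbitrary illustrative choices). With \( {L}_{1} = A - I \) and \( {L}_{2} = A - {2I} \), the operators commute and each is one to one on the kernel of the other, so the kernel dimensions must add.

```python
import numpy as np

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

A = np.diag([1.0, 1.0, 2.0, 5.0])
L1 = A - 1.0 * np.eye(4)   # ker = span(e1, e2)
L2 = A - 2.0 * np.eye(4)   # ker = span(e3)

# L1, L2 commute and each is one to one on the other's kernel,
# so ker(L1 L2) = ker(L1) ⊕ ker(L2) and the dimensions add up.
assert np.allclose(L1 @ L2, L2 @ L1)
assert nullity(L1 @ L2) == nullity(L1) + nullity(L2)  # 3 == 2 + 1
```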
Yes
Lemma 10.2.2 Let \( L \in \mathcal{L}\left( {V, V}\right) \) where \( V \) is an \( n \) dimensional vector space. Then if \( L \) is one to one, it follows that \( L \) is also onto. In fact, if \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) is a basis, then so is \( \left\{ {L{v}_{1},\cdots, L{v}_{n}}\right\} \) .
Proof: Let \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) be a basis for \( V \) . Then I claim that \( \left\{ {L{v}_{1},\cdots, L{v}_{n}}\right\} \) is also a basis for \( V \) . First of all, I show \( \left\{ {L{v}_{1},\cdots, L{v}_{n}}\right\} \) is linearly independent. Suppose\n\n\[ \mathop{\sum }\limits_{{k = 1}}^{n}{c}_{k}L{v}_{k} = 0 \]\n\nThen\n\n\[ L\left( {\mathop{\sum }\limits_{{k = 1}}^{n}{c}_{k}{v}_{k}}\right) = 0 \]\n\nand since \( L \) is one to one, it follows\n\n\[ \mathop{\sum }\limits_{{k = 1}}^{n}{c}_{k}{v}_{k} = 0 \]\n\nwhich implies each \( {c}_{k} = 0 \) . Therefore, \( \left\{ {L{v}_{1},\cdots, L{v}_{n}}\right\} \) is linearly independent. If there existed \( w \) not in the span of these vectors, then by Lemma 8.2.10, \( \left\{ {L{v}_{1},\cdots, L{v}_{n}, w}\right\} \) would be independent and this contradicts the exchange theorem, Theorem 8.2.4, because it would be a linearly independent set having more vectors than the spanning set \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) . Therefore \( \operatorname{span}\left( {L{v}_{1},\cdots, L{v}_{n}}\right) = V \), so \( \left\{ {L{v}_{1},\cdots, L{v}_{n}}\right\} \) is a basis and \( L \) is onto. ∎
Yes
Theorem 10.2.4 In the context of Definition 7.2.3, \[ V = {V}_{1} \oplus \cdots \oplus {V}_{q} \] and each \( {V}_{k} \) is \( A \) invariant, meaning \( A\left( {V}_{k}\right) \subseteq {V}_{k} \) . Also \( {\phi }_{l}\left( A\right) \) is one to one on each \( {V}_{k} \) for \( k \neq l \) . If \( {\beta }_{i} = \left\{ {{v}_{1}^{i},\cdots ,{v}_{{m}_{i}}^{i}}\right\} \) is a basis for \( {V}_{i} \), then \( \left\{ {{\beta }_{1},{\beta }_{2},\cdots ,{\beta }_{q}}\right\} \) is a basis for \( V \) .
Proof: It is clear \( {V}_{k} \) is a subspace which is \( A \) invariant because \( A \) commutes with \( {\phi }_{k}{\left( A\right) }^{{m}_{k}} \) . It is clear the operators \( {\phi }_{k}{\left( A\right) }^{{r}_{k}} \) commute. Thus if \( v \in {V}_{k} \) , \[ {\phi }_{k}{\left( A\right) }^{{r}_{k}}{\phi }_{l}{\left( A\right) }^{{r}_{l}}v = {\phi }_{l}{\left( A\right) }^{{r}_{l}}{\phi }_{k}{\left( A\right) }^{{r}_{k}}v = {\phi }_{l}{\left( A\right) }^{{r}_{l}}0 = 0 \] and so \( {\phi }_{l}{\left( A\right) }^{{r}_{l}} : {V}_{k} \rightarrow {V}_{k} \) . I claim \( {\phi }_{l}\left( A\right) \) is one to one on \( {V}_{k} \) whenever \( k \neq l \) . The two polynomials \( {\phi }_{l}\left( \lambda \right) \) and \( {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} \) are relatively prime so there exist polynomials \( m\left( \lambda \right), n\left( \lambda \right) \) such that \[ m\left( \lambda \right) {\phi }_{l}\left( \lambda \right) + n\left( \lambda \right) {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} = 1 \] It follows that the sum of all coefficients of \( \lambda \) raised to a positive power are zero and the constant term on the left is 1 . Therefore, using the convention \( {A}^{0} = I \) it follows \[ m\left( A\right) {\phi }_{l}\left( A\right) + n\left( A\right) {\phi }_{k}{\left( A\right) }^{{r}_{k}} = I \] If \( v \in {V}_{k} \), then from the above, \[ m\left( A\right) {\phi }_{l}\left( A\right) v + n\left( A\right) {\phi }_{k}{\left( A\right) }^{{r}_{k}}v = v \] Since \( v \) is in \( {V}_{k} \), it follows by definition, \[ m\left( A\right) {\phi }_{l}\left( A\right) v = v \] and so \( {\phi }_{l}\left( A\right) v \neq 0 \) unless \( v = 0 \) . Thus \( {\phi }_{l}\left( A\right) \) and hence \( {\phi }_{l}{\left( A\right) }^{{r}_{l}} \) is one to one on \( {V}_{k} \) for every \( k \neq l \) . 
By Lemma 10.1.6 and the fact that \( \ker \left( {\mathop{\prod }\limits_{{k = 1}}^{q}{\phi }_{k}{\left( A\right) }^{{r}_{k}}}\right) = V \), the direct sum decomposition (10.2) is obtained. The claim about the bases follows from Lemma 10.1.5. ∎
Yes
Corollary 10.2.5 Let the minimal polynomial of \( A \) be \( p\left( \lambda \right) = \mathop{\prod }\limits_{{k = 1}}^{q}{\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) where each \( {\phi }_{k} \) is irreducible. Let \( {V}_{k} = \ker \left( {{\phi }_{k}{\left( A\right) }^{{m}_{k}}}\right) \) . Then\n\n\[ \n{V}_{1} \oplus \cdots \oplus {V}_{q} = V \n\]\n\nand letting \( {A}_{k} \) denote the restriction of \( A \) to \( {V}_{k} \), it follows the minimal polynomial of \( {A}_{k} \) is \( {\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) .
Proof: Recall the direct sum, \( {V}_{1} \oplus \cdots \oplus {V}_{q} = V \) where \( {V}_{k} = \ker \left( {{\phi }_{k}{\left( A\right) }^{{m}_{k}}}\right) \) for \( p\left( \lambda \right) = \) \( \mathop{\prod }\limits_{{k = 1}}^{q}{\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) the minimal polynomial for \( A \) where the \( {\phi }_{k}\left( \lambda \right) \) are all irreducible. Thus each \( {V}_{k} \) is invariant with respect to \( A \) . What is the minimal polynomial of \( {A}_{k} \), the restriction of \( A \) to \( {V}_{k} \) ? First note that \( {\phi }_{k}{\left( {A}_{k}\right) }^{{m}_{k}}\left( {V}_{k}\right) = \{ 0\} \) by definition. Thus if \( \eta \left( \lambda \right) \) is the minimal polynomial for \( {A}_{k} \) then it must divide \( {\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) and so by Corollary 8.3.11 \( \eta \left( \lambda \right) = {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} \) where \( {r}_{k} \leq {m}_{k} \) . Could \( {r}_{k} < {m}_{k} \) ? No, this is not possible because then \( p\left( \lambda \right) \) would fail to be the minimal polynomial for \( A \) . You could substitute for the term \( {\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) in the factorization of \( p\left( \lambda \right) \) with \( {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} \) and the resulting polynomial \( {p}^{\prime } \) would satisfy \( {p}^{\prime }\left( A\right) = 0 \) . Here is why. 
From Theorem 10.2.4, a typical \( x \in V \) is of the form\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{q}{v}_{i},\;{v}_{i} \in {V}_{i} \n\]\n\nThen since all the factors commute,\n\n\[ \n{p}^{\prime }\left( A\right) \left( {\mathop{\sum }\limits_{{i = 1}}^{q}{v}_{i}}\right) = \mathop{\prod }\limits_{{i \neq k}}^{q}{\phi }_{i}{\left( A\right) }^{{m}_{i}}{\phi }_{k}{\left( A\right) }^{{r}_{k}}\left( {\mathop{\sum }\limits_{{i = 1}}^{q}{v}_{i}}\right) \n\]\n\nFor \( j \neq k \)\n\n\[ \n\mathop{\prod }\limits_{{i \neq k}}^{q}{\phi }_{i}{\left( A\right) }^{{m}_{i}}{\phi }_{k}{\left( A\right) }^{{r}_{k}}{v}_{j} = \mathop{\prod }\limits_{{i \neq k, j}}^{q}{\phi }_{i}{\left( A\right) }^{{m}_{i}}{\phi }_{k}{\left( A\right) }^{{r}_{k}}{\phi }_{j}{\left( A\right) }^{{m}_{j}}{v}_{j} = 0 \n\]\n\nIf \( j = k \) ,\n\n\[ \n\mathop{\prod }\limits_{{i \neq k}}^{q}{\phi }_{i}{\left( A\right) }^{{m}_{i}}{\phi }_{k}{\left( A\right) }^{{r}_{k}}{v}_{k} = 0 \n\]\n\nbecause \( {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} \) is the minimal polynomial of \( {A}_{k} \), so that \( {\phi }_{k}{\left( A\right) }^{{r}_{k}}{v}_{k} = 0 \). This shows \( {p}^{\prime }\left( \lambda \right) \) is a monic polynomial having smaller degree than \( p\left( \lambda \right) \) such that \( {p}^{\prime }\left( A\right) = 0 \), contradicting the fact that \( p\left( \lambda \right) \) is the minimal polynomial of \( A \). Thus \( {r}_{k} = {m}_{k} \) and the minimal polynomial for \( {A}_{k} \) is \( {\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) as claimed. ∎
Yes
Theorem 10.2.6 Suppose \( V \) is a vector space with field of scalars \( \mathbb{F} \) and \( A \in \mathcal{L}\left( {V, V}\right) \) . Suppose also\n\n\[ V = {V}_{1} \oplus \cdots \oplus {V}_{q} \]\n\nwhere each \( {V}_{k} \) is \( A \) invariant. \( \left( {A{V}_{k} \subseteq {V}_{k}}\right) \) Also let \( {\beta }_{k} \) be a basis for \( {V}_{k} \) and let \( {A}_{k} \) denote the restriction of \( A \) to \( {V}_{k} \) . Letting \( {M}^{k} \) denote the matrix of \( {A}_{k} \) with respect to this basis, it follows the matrix of \( A \) with respect to the basis \( \left\{ {{\beta }_{1},\cdots ,{\beta }_{q}}\right\} \) is\n\n\[ \left( \begin{matrix} {M}^{1} & & 0 \\ & \ddots & \\ 0 & & {M}^{q} \end{matrix}\right) \]
Proof: Recall the matrix \( M \) of a linear transformation \( A \) is defined such that the following diagram commutes.\n\n\[ \begin{array}{ccc} V & \overset{A}{\longrightarrow} & V \\ q \uparrow & & \uparrow q \\ {\mathbb{F}}^{n} & \underset{M}{\longrightarrow} & {\mathbb{F}}^{n} \end{array} \]\n\nwhere\n\n\[ q\left( \mathbf{x}\right) \equiv \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{v}_{i} \]\n\nand \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) is a basis for \( V \) . Now when \( V = {V}_{1} \oplus \cdots \oplus {V}_{q} \) with each \( {V}_{k} \) invariant with respect to the linear transformation \( A \), and \( {\beta }_{k} = \left\{ {{v}_{1}^{k},\cdots ,{v}_{{m}_{k}}^{k}}\right\} \) a basis for \( {V}_{k} \), one can consider the matrix \( {M}^{k} \) of \( {A}_{k} \) taken with respect to the basis \( {\beta }_{k} \) where \( {A}_{k} \) is the restriction of \( A \) to \( {V}_{k} \) . Then the claim of the theorem is true because if \( M \) is given as described it causes the diagram to commute.
To see this, let \( \mathbf{x} \in {\mathbb{F}}^{{m}_{k}} \) .\n\n\[ q\left( \begin{matrix} {M}^{1} & & & & 0 \\ & \ddots & & & \\ & & {M}^{k} & & \\ & & & \ddots & \\ 0 & & & & {M}^{q} \end{matrix}\right) \left( \begin{matrix} \mathbf{0} \\ \vdots \\ \mathbf{x} \\ \vdots \\ \mathbf{0} \end{matrix}\right) = q\left( \begin{matrix} \mathbf{0} \\ \vdots \\ {M}^{k}\mathbf{x} \\ \vdots \\ \mathbf{0} \end{matrix}\right) \equiv \mathop{\sum }\limits_{{ij}}{M}_{ij}^{k}{x}_{j}{v}_{i}^{k} \]\n\nwhile\n\n\[ {Aq}\left( \begin{matrix} \mathbf{0} \\ \vdots \\ \mathbf{x} \\ \vdots \\ \mathbf{0} \end{matrix}\right) \equiv A\mathop{\sum }\limits_{j}{x}_{j}{v}_{j}^{k} = \mathop{\sum }\limits_{j}{x}_{j}{A}_{k}{v}_{j}^{k} = \mathop{\sum }\limits_{j}{x}_{j}\mathop{\sum }\limits_{i}{M}_{ij}^{k}{v}_{i}^{k} \]\n\nbecause, as discussed earlier, \( A{v}_{j}^{k} = \mathop{\sum }\limits_{i}{M}_{ij}^{k}{v}_{i}^{k} \) because \( {M}^{k} \) is the matrix of \( {A}_{k} \) with respect to the basis \( {\beta }_{k} \) . ∎
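The block structure can be illustrated concretely (NumPy sketch; the blocks `M1`, `M2` are arbitrary choices). A matrix assembled block-diagonally from \( {M}^{1} \) and \( {M}^{2} \) leaves each coordinate subspace invariant, matching the statement of the theorem.

```python
import numpy as np

# Block diagonal matrix built from the matrices of the restrictions
M1 = np.array([[1.0, 2.0], [3.0, 4.0]])
M2 = np.array([[5.0]])

A = np.zeros((3, 3))
A[:2, :2] = M1
A[2:, 2:] = M2

# span(e1, e2) is A invariant: images have zero third coordinate
v = np.array([1.0, -1.0, 0.0])
assert np.allclose((A @ v)[2:], 0)

# span(e3) is A invariant: images have zero first two coordinates
w = np.array([0.0, 0.0, 2.0])
assert np.allclose((A @ w)[:2], 0)
```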
Yes
Lemma 10.3.2 Let \( W \) be an \( A \) invariant \( \left( {{AW} \subseteq W}\right) \) subspace of \( \ker \left( {\phi {\left( A\right) }^{m}}\right) \) for \( m \) a positive integer, where \( \phi \left( \lambda \right) \) is an irreducible monic polynomial of degree \( d \) . For\n\n\[ x \in \ker \left( {\phi {\left( A\right) }^{m}}\right) \smallsetminus \{ 0\}, \]\n\nlet \( \eta \left( \lambda \right) \) be the monic polynomial of smallest degree such that \( \eta \left( A\right) x = 0 \) . Then\n\n\[ \eta \left( \lambda \right) = \phi {\left( \lambda \right) }^{k} \]\n\nfor some positive integer \( k \) . Thus if \( r \) is the degree of \( \eta \), then \( r = {kd} \) . Also, the cyclic set\n\n\[ {\beta }_{x} \equiv \left\{ {x,{Ax},\cdots ,{A}^{r - 1}x}\right\} \]\n\nis linearly independent. Recall that \( r \) is the smallest integer such that \( {A}^{r}x \) is a linear combination of \( \{ x,{Ax},\cdots ,{A}^{r - 1}x\} \) .
Proof: Consider the first claim. If \( \eta \left( A\right) x = 0 \), then writing\n\n\[ \phi {\left( \lambda \right) }^{m} = \eta \left( \lambda \right) g\left( \lambda \right) + r\left( \lambda \right) \]\n\nwhere either \( r\left( \lambda \right) = 0 \) or the degree of \( r\left( \lambda \right) \) is less than that of \( \eta \left( \lambda \right) \), the latter possibility cannot occur because if it did, \( r\left( A\right) x = 0 \) and this would contradict the definition of \( \eta \left( \lambda \right) \) . Therefore \( r\left( \lambda \right) = 0 \) and so \( \eta \left( \lambda \right) \) divides \( \phi {\left( \lambda \right) }^{m} \) . From Corollary 8.3.11,\n\n\[ \eta \left( \lambda \right) = \phi {\left( \lambda \right) }^{k} \]\n\nfor some integer \( k \leq m \) . Since \( x \neq 0 \), it follows \( k > 0 \) . In particular, the degree of \( \eta \left( \lambda \right) \) equals \( {kd} \) .\n\nNow consider \( x \neq 0, x \in \ker \left( {\phi {\left( A\right) }^{m}}\right) \) and the vectors \( {\beta }_{x} \) . Do these vectors yield a linearly independent set? The vectors are \( \left\{ {x,{Ax},{A}^{2}x,\cdots ,{A}^{r - 1}x}\right\} \) where \( {A}^{r}x \) is in\n\n\[ \operatorname{span}\left( {x,{Ax},{A}^{2}x,\cdots ,{A}^{r - 1}x}\right) \]\n\nand \( r \) is as small as possible for this to happen. Suppose then that there are scalars \( {d}_{j} \), not all zero, such that\n\n\[ \mathop{\sum }\limits_{{j = 0}}^{{r - 1}}{d}_{j}{A}^{j}x = 0,\;x \neq 0 \]\n\n(10.3)\n\nLet \( s \) be the largest index such that \( {d}_{s} \neq 0 \); thus \( s \leq r - 1 \) . Then \( {A}^{s}x \) is a linear combination of the preceding vectors in the list, which contradicts the definition of \( r \) . Thus each \( {d}_{j} = 0 \) and \( {\beta }_{x} \) is linearly independent. From the first part, \( r = {kd} \) for some positive integer \( k \) . ∎
Yes
Lemma 10.3.3 Let \( V \) be a finite dimensional vector space and let \( B \in \mathcal{L}\left( {V, V}\right) \) . Then there exist vectors \( {v}_{1},\cdots ,{v}_{r} \) such that \( \left\{ {B{v}_{1},\cdots, B{v}_{r}}\right\} \) is a basis for \( B\left( V\right) \) and\n\n\[ V = \operatorname{span}\left( {{v}_{1},\cdots ,{v}_{r}}\right) \oplus \ker \left( B\right) \]
Proof: Let \( \left\{ {B{v}_{1},\cdots, B{v}_{r}}\right\} \) be a basis for \( B\left( V\right) \) . Now let \( \left\{ {{w}_{1},\cdots ,{w}_{s}}\right\} \) be a basis for \( \ker \left( B\right) \) . Then if \( v \in V \), there exist unique scalars \( {c}_{i} \) such that\n\n\[ {Bv} = \mathop{\sum }\limits_{{i = 1}}^{r}{c}_{i}B{v}_{i} \]\n\nand so \( B\left( {v - \mathop{\sum }\limits_{{i = 1}}^{r}{c}_{i}{v}_{i}}\right) = 0 \) and so there exist unique scalars \( {d}_{i} \) such that\n\n\[ v - \mathop{\sum }\limits_{{i = 1}}^{r}{c}_{i}{v}_{i} = \mathop{\sum }\limits_{{j = 1}}^{s}{d}_{j}{w}_{j} \]\n\nIt remains to verify that \( \left\{ {{v}_{1},\cdots ,{v}_{r},{w}_{1},\cdots ,{w}_{s}}\right\} \) is linearly independent. Suppose then that\n\n\[ \mathop{\sum }\limits_{i}{a}_{i}{v}_{i} + \mathop{\sum }\limits_{j}{b}_{j}{w}_{j} = 0 \]\n\nDo \( B \) to both sides. This yields \( \mathop{\sum }\limits_{i}{a}_{i}B{v}_{i} = 0 \) and by assumption, this requires each \( {a}_{i} = 0 \) . Then independence of the \( {w}_{i} \) yields each \( {b}_{j} = 0 \) . ∎
Yes
Theorem 10.3.4 Let \( V = \ker \left( {\phi {\left( A\right) }^{m}}\right) \) for \( m \) a positive integer and \( A \in \mathcal{L}\left( {Z, Z}\right) \) where \( Z \) is some vector space containing \( V \), and \( \phi \left( \lambda \right) \) is an irreducible monic polynomial over the field of scalars. Then there exist vectors \( \left\{ {{v}_{1},\cdots ,{v}_{s}}\right\} \) and \( A \) cyclic sets \( {\beta }_{{v}_{j}} \) such that \( \left\{ {{\beta }_{{v}_{1}},\cdots ,{\beta }_{{v}_{s}}}\right\} \) is a basis for \( V \) .
Proof: First suppose \( m = 1 \) . Then in Lemma 10.3.2 you can let \( W = \{ 0\} \) and \( U = \) \( V = \ker \left( {\phi \left( A\right) }\right) \) . Then by this lemma, there exist \( {v}_{1},{v}_{2},\cdots ,{v}_{s} \) such that \( \left\{ {{\beta }_{{v}_{1}},\cdots ,{\beta }_{{v}_{s}}}\right\} \) is a basis for \( V \) . Suppose then that the theorem is true whenever \( V = \ker \left( {\phi {\left( A\right) }^{m - 1}}\right), m \geq 2 \) .\n\nSuppose \( V = \ker \left( {\phi {\left( A\right) }^{m}}\right) \) . Then \( \phi {\left( A\right) }^{m - 1} \) maps \( V \) to \( V \) and so by Lemma 10.3.3,\n\n\[ V = \ker \left( {\phi {\left( A\right) }^{m - 1}}\right) + \phi {\left( A\right) }^{m - 1}\left( V\right) \]\n\nClearly \( \phi {\left( A\right) }^{m - 1}\left( V\right) \subseteq \ker \left( {\phi \left( A\right) }\right) \) . Is \( \phi {\left( A\right) }^{m - 1}\left( V\right) \) also \( A \) invariant? Yes, this is the case because if \( y \in V = \ker \left( {\phi {\left( A\right) }^{m}}\right) \), then \( \phi {\left( A\right) }^{m - 1}y \) is a typical thing in \( \phi {\left( A\right) }^{m - 1}\left( V\right) \) . But\n\n\[ {A\phi }{\left( A\right) }^{m - 1}\left( y\right) = \phi {\left( A\right) }^{m - 1}\left( {Ay}\right) \in \phi {\left( A\right) }^{m - 1}\left( V\right) \]\n\nBy induction, there exists a basis for \( \ker \left( {\phi {\left( A\right) }^{m - 1}}\right) \) which is of the form\n\n\[ \left\{ {{\beta }_{{v}_{1}},\cdots ,{\beta }_{{v}_{r}}}\right\} \]\n\nand now, by Lemma 10.3.2, there exists a basis\n\n\[ \left\{ {{\beta }_{{x}_{1}},\cdots ,{\beta }_{{x}_{l}},{\beta }_{{v}_{1}},\cdots ,{\beta }_{{v}_{r}}}\right\} \]\n\nfor \( V = \ker \left( {\phi {\left( A\right) }^{m - 1}}\right) + \phi {\left( A\right) }^{m - 1}\left( V\right) \) . ∎
Yes
Lemma 10.4.2 Suppose \( {N}^{k}x \neq 0 \) . Then \( \left\{ {x,{Nx},\cdots ,{N}^{k}x}\right\} \) is linearly independent. Also, the minimal polynomial of \( N \) is \( {\lambda }^{m} \) where \( m \) is the first such that \( {N}^{m} = 0 \) .
Proof: Suppose \( \mathop{\sum }\limits_{{i = 0}}^{k}{c}_{i}{N}^{i}x = 0 \) . Since \( {N}^{k}x \neq 0 \) while \( {N}^{m}x = 0 \), there exists \( l \), the largest integer such that \( {N}^{l}x \neq 0 \); thus \( k \leq l < m \) and \( {N}^{l + 1}x = 0 \) . Then multiply both sides by \( {N}^{l} \) to conclude that \( {c}_{0} = 0 \) . Next multiply both sides by \( {N}^{l - 1} \) to conclude that \( {c}_{1} = 0 \) and continue this way to obtain that all the \( {c}_{i} = 0 \) . Thus \( \left\{ {x,{Nx},\cdots ,{N}^{k}x}\right\} \) is linearly independent.\n\nAs for the minimal polynomial, \( {N}^{m} = 0 \), so \( {\lambda }^{m} \) sends \( N \) to 0 . The minimal polynomial therefore divides \( {\lambda }^{m} \) and hence equals \( {\lambda }^{s} \) for some \( s \leq m \) . If \( s < m \), then \( {N}^{s} = 0 \), contradicting the definition of \( m \) . Thus the minimal polynomial of \( N \) is \( {\lambda }^{m} \) . ∎
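The independence claim can be checked numerically (NumPy sketch; the \( 4 \times 4 \) Jordan block and starting vector are illustrative choices).

```python
import numpy as np

# N is a single 4x4 nilpotent Jordan block: N^3 != 0 but N^4 = 0
N = np.diag(np.ones(3), k=1)
x = np.array([0.0, 0.0, 0.0, 1.0])  # N^3 x = e1 != 0

# Stack x, Nx, N^2 x, N^3 x as columns and check linear independence
vectors = np.column_stack([np.linalg.matrix_power(N, j) @ x for j in range(4)])
assert np.linalg.matrix_rank(vectors) == 4

# N^4 = 0, so the minimal polynomial of N is λ^4
assert np.allclose(np.linalg.matrix_power(N, 4), 0)
assert not np.allclose(np.linalg.matrix_power(N, 3), 0)
```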
No
Corollary 10.4.5 Let \( J,{J}^{\prime } \) both be matrices of the nilpotent linear transformation \( N \in \) \( \mathcal{L}\left( {W, W}\right) \) which are of the form described in Proposition 10.4.4. Then \( J = {J}^{\prime } \) . In fact, if the rank of \( {J}^{k} \) equals the rank of \( {J}^{\prime k} \) for all nonnegative integers \( k \), then \( J = {J}^{\prime } \) .
Proof: Since \( J \) and \( {J}^{\prime } \) are similar, it follows that for each \( k \) an integer, \( {J}^{k} \) and \( {J}^{\prime k} \) are similar. Hence, for each \( k \), these matrices have the same rank. Now suppose \( J \neq {J}^{\prime } \) . Note first that\n\n\[ \n{J}_{r}{\left( 0\right) }^{r} = 0,\;{J}_{r}{\left( 0\right) }^{r - 1} \neq 0.\n\]\n\nDenote the blocks of \( J \) as \( {J}_{{r}_{k}}\left( 0\right) \) and the blocks of \( {J}^{\prime } \) as \( {J}_{{r}_{k}^{\prime }}\left( 0\right) \) . Let \( k \) be the first such that \( {J}_{{r}_{k}}\left( 0\right) \neq {J}_{{r}_{k}^{\prime }}\left( 0\right) \) . Without loss of generality, suppose that \( {r}_{k} > {r}_{k}^{\prime } \) . By block multiplication and the above observation, it follows that the two matrices \( {J}^{{r}_{k} - 1} \) and \( {J}^{\prime {r}_{k} - 1} \) are respectively of the forms\n\n\[ \n\left( \begin{matrix} {M}_{{r}_{1}} & & & & & 0 \\ & \ddots & & & & \\ & & {M}_{{r}_{k}} & & & \\ & & & 0 & & \\ & & & & \ddots & \\ 0 & & & & & 0 \end{matrix}\right)\n\]\n\nand\n\n\[ \n\left( \begin{matrix} {M}_{{r}_{1}^{\prime }} & & & & & 0 \\ & \ddots & & & & \\ & & {M}_{{r}_{k}^{\prime }} & & & \\ & & & 0 & & \\ & & & & \ddots & \\ 0 & & & & & 0 \end{matrix}\right)\n\]\n\nwhere \( {M}_{{r}_{j}} = {M}_{{r}_{j}^{\prime }} \) for \( j \leq k - 1 \) but \( {M}_{{r}_{k}^{\prime }} \) is a zero \( {r}_{k}^{\prime } \times {r}_{k}^{\prime } \) matrix while \( {M}_{{r}_{k}} \) is a larger matrix which is not equal to 0 . For example,\n\n\[ \n{M}_{{r}_{k}} = \left( \begin{matrix} 0 & \cdots & 1 \\ & \ddots & \vdots \\ 0 & & 0 \end{matrix}\right)\n\]\n\nThus there are more pivot columns in \( {J}^{{r}_{k} - 1} \) than in \( {\left( {J}^{\prime }\right) }^{{r}_{k} - 1} \), contradicting the requirement that \( {J}^{k} \) and \( {J}^{\prime k} \) have the same rank. ∎
Yes
Proposition 10.5.1 Let the minimal polynomial of \( A \in \mathcal{L}\left( {V, V}\right) \) be given by\n\n\[ p\left( \lambda \right) = \mathop{\prod }\limits_{{k = 1}}^{r}{\left( \lambda - {\lambda }_{k}\right) }^{{m}_{k}} \]\n\nThen the eigenvalues of \( A \) are \( \left\{ {{\lambda }_{1},\cdots ,{\lambda }_{r}}\right\} \) .
It follows from Corollary 10.2.5 that\n\n\[ V = \ker {\left( A - {\lambda }_{1}I\right) }^{{m}_{1}} \oplus \cdots \oplus \ker {\left( A - {\lambda }_{r}I\right) }^{{m}_{r}} \]\n\n\[ \equiv \;{V}_{1} \oplus \cdots \oplus {V}_{r} \]\n\nwhere \( I \) denotes the identity linear transformation. Without loss of generality, let the dimensions of the \( {V}_{k} \) be decreasing from left to right. These \( {V}_{k} \) are called the generalized eigenspaces.\n\nIt follows from the definition of \( {V}_{k} \) that \( \left( {A - {\lambda }_{k}I}\right) \) is nilpotent on \( {V}_{k} \) and clearly each \( {V}_{k} \) is \( A \) invariant. Therefore from Proposition 10.4.4, and letting \( {A}_{k} \) denote the restriction of \( A \) to \( {V}_{k} \), there exists an ordered basis for \( {V}_{k},{\beta }_{k} \) such that with respect to this basis, the matrix of \( \left( {{A}_{k} - {\lambda }_{k}I}\right) \) is of the form given in that proposition, denoted here by \( {J}^{k} \). What is the matrix of \( {A}_{k} \) with respect to \( {\beta }_{k} \) ? Letting \( \left\{ {{b}_{1},\cdots ,{b}_{r}}\right\} = {\beta }_{k} \),\n\n\[ {A}_{k}{b}_{j} = \left( {{A}_{k} - {\lambda }_{k}I}\right) {b}_{j} + {\lambda }_{k}I{b}_{j} \equiv \mathop{\sum }\limits_{s}{J}_{sj}^{k}{b}_{s} + \mathop{\sum }\limits_{s}{\lambda }_{k}{\delta }_{sj}{b}_{s} = \mathop{\sum }\limits_{s}\left( {{J}_{sj}^{k} + {\lambda }_{k}{\delta }_{sj}}\right) {b}_{s} \]\n\nand so the matrix of \( {A}_{k} \) with respect to this basis is\n\n\[ {J}^{k} + {\lambda }_{k}I \]\n\nwhere \( I \) is the identity matrix. Therefore, with respect to the ordered basis \( \left\{ {{\beta }_{1},\cdots ,{\beta }_{r}}\right\} \) the matrix of \( A \) is in Jordan canonical form. 
This means the matrix is of the form\n\n\[ \left( \begin{matrix} J\left( {\lambda }_{1}\right) & & 0 \\ & \ddots & \\ 0 & & J\left( {\lambda }_{r}\right) \end{matrix}\right) \]\n\n(10.5)\n\nwhere \( J\left( {\lambda }_{k}\right) \) is an \( {m}_{k} \times {m}_{k} \) matrix of the form\n\n\[ \left( \begin{matrix} {J}_{{k}_{1}}\left( {\lambda }_{k}\right) & & & 0 \\ & {J}_{{k}_{2}}\left( {\lambda }_{k}\right) & & \\ & & \ddots & \\ 0 & & & {J}_{{k}_{r}}\left( {\lambda }_{k}\right) \end{matrix}\right) \]\n\n\( \left( {10.6}\right) \)\n\nwhere \( {k}_{1} \geq {k}_{2} \geq \cdots \geq {k}_{r} \geq 1 \) and \( \mathop{\sum }\limits_{{i = 1}}^{r}{k}_{i} = {m}_{k} \). Here \( {J}_{k}\left( \lambda \right) \) is a \( k \times k \) Jordan block of the form\n\n\[ \left( \begin{matrix} \lambda & 1 & & 0 \\ 0 & \lambda & \ddots & \\ & \ddots & \ddots & 1 \\ 0 & & 0 & \lambda \end{matrix}\right) \]\n\n(10.7)\n\nThis proves the existence part of the following fundamental theorem.
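The block structure in (10.5)-(10.7) is easy to experiment with numerically. The following sketch (assuming NumPy is available; `jordan_block` is our own helper, not a library function) assembles a Jordan matrix with blocks \( J_2(2), J_1(2), J_1(5) \) and checks that the eigenvalues sit on the diagonal and that \( A - \lambda I \) is nilpotent on the corresponding generalized eigenspace:

```python
import numpy as np

def jordan_block(lam, k):
    """k x k Jordan block J_k(lam): lam on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

# J(2) built from blocks of sizes 2 >= 1 as in (10.6), together with a
# 1 x 1 block J(5), assembled block diagonally as in (10.5).
J = np.block([
    [jordan_block(2.0, 2), np.zeros((2, 1)),     np.zeros((2, 1))],
    [np.zeros((1, 2)),     jordan_block(2.0, 1), np.zeros((1, 1))],
    [np.zeros((1, 2)),     np.zeros((1, 1)),     jordan_block(5.0, 1)],
])

# Eigenvalues are read off the diagonal, and (J - 2I) is nilpotent on the
# generalized eigenspace for lambda = 2 (the leading 3 x 3 block).
assert np.allclose(np.sort(np.linalg.eigvals(J)), [2, 2, 2, 5])
N = (J - 2.0 * np.eye(4))[:3, :3]
assert np.allclose(np.linalg.matrix_power(N, 2), 0)
```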
Lemma 10.5.3 Suppose \( J \) is of the form \( {J}_{s} \) described above in (10.8) where the constant \( \alpha \) on the main diagonal is less than one in absolute value. Then\n\n\[ \mathop{\lim }\limits_{{k \rightarrow \infty }}{\left( {J}^{k}\right) }_{ij} = 0 \]
Proof: From (10.9), it follows that for large \( k \) and \( j \leq {m}_{s} \), \n\n\[ \left( \begin{array}{l} k \\ j \end{array}\right) \leq \frac{k\left( {k - 1}\right) \cdots \left( {k - {m}_{s} + 1}\right) }{{m}_{s}!}. \]\n\nTherefore, letting \( C \) be the largest value of \( \left| {\left( {N}^{j}\right) }_{pq}\right| \) for \( 0 \leq j \leq {m}_{s} \), \n\n\[ \left| {\left( {J}^{k}\right) }_{pq}\right| \leq {m}_{s}C\left( \frac{k\left( {k - 1}\right) \cdots \left( {k - {m}_{s} + 1}\right) }{{m}_{s}!}\right) {\left| \alpha \right| }^{k - {m}_{s}} \]\n\nwhich converges to zero as \( k \rightarrow \infty \). This is most easily seen by applying the ratio test to the series\n\n\[ \mathop{\sum }\limits_{{k = {m}_{s}}}^{\infty }\left( \frac{k\left( {k - 1}\right) \cdots \left( {k - {m}_{s} + 1}\right) }{{m}_{s}!}\right) {\left| \alpha \right| }^{k - {m}_{s}} \]\n\nand then noting that if a series converges, then the \( {k}^{th} \) term converges to zero. ∎
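The lemma can be seen numerically: the binomial factor makes the entries of \( J^k \) grow at first, but the geometric factor \( |\alpha|^k \) eventually wins, exactly as the ratio test argument shows. A small sketch (assuming NumPy):

```python
import numpy as np

# Jordan block with alpha = 0.9 on the main diagonal, |alpha| < 1.
k = 4
J = 0.9 * np.eye(k) + np.diag(np.ones(k - 1), 1)

norms = [np.abs(np.linalg.matrix_power(J, n)).max() for n in (1, 10, 500)]
# The binomial factor wins at first, but the geometric factor |alpha|^n
# eventually dominates and drives every entry to zero.
assert norms[1] > norms[0]    # transient growth
assert norms[2] < 1e-10       # eventual decay to zero
```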
Proposition 10.7.2 Let \( q\left( \lambda \right) \) be a polynomial and let \( C\left( {q\left( \lambda \right) }\right) \) be its companion matrix. Then \( q\left( {C\left( {q\left( \lambda \right) }\right) }\right) = 0 \) .
Proof: Write \( C \) instead of \( C\left( {q\left( \lambda \right) }\right) \) for short. Note that\n\n\[ C{\mathbf{e}}_{1} = {\mathbf{e}}_{2}, C{\mathbf{e}}_{2} = {\mathbf{e}}_{3},\cdots, C{\mathbf{e}}_{n - 1} = {\mathbf{e}}_{n} \]\n\nThus\n\n\[ {\mathbf{e}}_{k} = {C}^{k - 1}{\mathbf{e}}_{1}, k = 1,\cdots, n \]\n\n\( \left( {10.11}\right) \)\n\nand so it follows\n\n\[ \left\{ {{\mathbf{e}}_{1}, C{\mathbf{e}}_{1},{C}^{2}{\mathbf{e}}_{1},\cdots ,{C}^{n - 1}{\mathbf{e}}_{1}}\right\} \]\n\n(10.12)\n\nare linearly independent. Hence these form a basis for \( {\mathbb{F}}^{n} \) . Now note that \( C{\mathbf{e}}_{n} \) is given by\n\n\[ C{\mathbf{e}}_{n} = - {a}_{0}{\mathbf{e}}_{1} - {a}_{1}{\mathbf{e}}_{2} - \cdots - {a}_{n - 1}{\mathbf{e}}_{n} \]\n\nand from (10.11) this implies\n\n\[ {C}^{n}{\mathbf{e}}_{1} = - {a}_{0}{\mathbf{e}}_{1} - {a}_{1}C{\mathbf{e}}_{1} - \cdots - {a}_{n - 1}{C}^{n - 1}{\mathbf{e}}_{1} \]\n\nand so\n\n\[ q\left( C\right) {\mathbf{e}}_{1} = \mathbf{0}. \]\n\nNow since (10.12) is a basis, every vector of \( {\mathbb{F}}^{n} \) is of the form \( k\left( C\right) {\mathbf{e}}_{1} \) for some polynomial \( k\left( \lambda \right) \) . Therefore, if \( \mathbf{v} \in {\mathbb{F}}^{n} \),\n\n\[ q\left( C\right) \mathbf{v} = q\left( C\right) k\left( C\right) {\mathbf{e}}_{1} = k\left( C\right) q\left( C\right) {\mathbf{e}}_{1} = \mathbf{0} \]\n\nwhich shows \( q\left( C\right) = 0 \). ∎
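The proposition is easy to verify numerically. In the sketch below (assuming NumPy; `companion` is our own helper built to match the convention \( C\mathbf{e}_1 = \mathbf{e}_2,\ldots \) used above), \( q(C) \) is computed directly:

```python
import numpy as np

def companion(a):
    """Companion matrix of q(x) = a[0] + a[1] x + ... + a[n-1] x^(n-1) + x^n,
    with C e_1 = e_2, ..., C e_(n-1) = e_n and last column -a."""
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # ones on the subdiagonal
    C[:, -1] = -np.asarray(a, float)    # last column from the coefficients
    return C

# q(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])
qC = (np.linalg.matrix_power(C, 3) - 6 * np.linalg.matrix_power(C, 2)
      + 11 * C - 6 * np.eye(3))
assert np.allclose(qC, 0)                                    # q(C) = 0
assert np.allclose(np.sort(np.linalg.eigvals(C)), [1, 2, 3]) # roots of q
```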
Theorem 10.7.3 Let \( A \in \mathcal{L}\left( {V, V}\right) \) where \( V \) is a vector space with field of scalars \( \mathbb{F} \) and minimal polynomial \[ \mathop{\prod }\limits_{{i = 1}}^{q}{\phi }_{i}{\left( \lambda \right) }^{{m}_{i}} \] where each \( {\phi }_{i}\left( \lambda \right) \) is irreducible. Letting \( {V}_{k} \equiv \ker \left( {{\phi }_{k}{\left( \lambda \right) }^{{m}_{k}}}\right) \), it follows \[ V = {V}_{1} \oplus \cdots \oplus {V}_{q} \] where each \( {V}_{k} \) is \( A \) invariant. Letting \( {B}_{k} \) denote a basis for \( {V}_{k} \) and \( {M}^{k} \) the matrix of the restriction of \( A \) to \( {V}_{k} \), it follows that the matrix of \( A \) with respect to the basis \( \left\{ {{B}_{1},\cdots ,{B}_{q}}\right\} \) is the block diagonal matrix of the form \[ \left( \begin{matrix} {M}^{1} & & 0 \\ & \ddots & \\ 0 & & {M}^{q} \end{matrix}\right) \] \( \left( {10.13}\right) \) If \( {B}_{k} \) is given as \( \left\{ {{\beta }_{{v}_{1}},\cdots ,{\beta }_{{v}_{s}}}\right\} \) as described in Theorem 10.3.4 where each \( {\beta }_{{v}_{j}} \) is an \( A \) cyclic set of vectors, then the matrix \( {M}^{k} \) is of the form \[ {M}^{k} = \left( \begin{matrix} C\left( {{\phi }_{k}{\left( \lambda \right) }^{{r}_{1}}}\right) & & 0 \\ & \ddots & \\ 0 & & C\left( {{\phi }_{k}{\left( \lambda \right) }^{{r}_{s}}}\right) \end{matrix}\right) \] (10.14) where the \( A \) cyclic sets of vectors may be arranged in order such that the positive integers \( {r}_{j} \) satisfy \( {r}_{1} \geq \cdots \geq {r}_{s} \) and \( C\left( {{\phi }_{k}{\left( \lambda \right) }^{{r}_{j}}}\right) \) is the companion matrix of the polynomial \( {\phi }_{k}{\left( \lambda \right) }^{{r}_{j}} \) .
Proof: By Theorem 10.2.6 the matrix of \( A \) with respect to \( \left\{ {{B}_{1},\cdots ,{B}_{q}}\right\} \) is of the form given in (10.13). Now by Theorem 10.3.4 the basis \( {B}_{k} \) may be chosen in the form \( \left\{ {{\beta }_{{v}_{1}},\cdots ,{\beta }_{{v}_{s}}}\right\} \) where each \( {\beta }_{{v}_{k}} \) is an \( A \) cyclic set of vectors and also it can be assumed the lengths of these \( {\beta }_{{v}_{k}} \) are decreasing. Thus \[ {V}_{k} = \operatorname{span}\left( {\beta }_{{v}_{1}}\right) \oplus \cdots \oplus \operatorname{span}\left( {\beta }_{{v}_{s}}\right) \] and it only remains to consider the matrix of \( A \) restricted to \( \operatorname{span}\left( {\beta }_{{v}_{k}}\right) \) . Then you can apply Theorem 10.2.6 to get the result in (10.14). Say \[ {\beta }_{{v}_{k}} = {v}_{k}, A{v}_{k},\cdots ,{A}^{d - 1}{v}_{k} \] where \( \eta \left( A\right) {v}_{k} = 0 \) and the degree of \( \eta \left( \lambda \right) \) is \( d \), the smallest degree such that this is so, \( \eta \) being a monic polynomial. Then by Corollary 8.3.11, \( \eta \left( \lambda \right) = {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} \) where \( {r}_{k} \leq {m}_{k} \) . Now \[ A\left( {\operatorname{span}\left( {\beta }_{{v}_{k}}\right) }\right) \subseteq \operatorname{span}\left( {\beta }_{{v}_{k}}\right) \] because \( {A}^{d}{v}_{k} \) is in \( \operatorname{span}\left( {{v}_{k}, A{v}_{k},\cdots ,{A}^{d - 1}{v}_{k}}\right) \) . It remains to consider the matrix of \( A \) restricted to \( \operatorname{span}\left( {\beta }_{{v}_{k}}\right) \) . Say \[ \eta \left( \lambda \right) = {\phi }_{k}{\left( \lambda \right) }^{{r}_{k}} = {a}_{0} + {a}_{1}\lambda + \cdots + {a}_{d - 1}{\lambda }^{d - 1} + {\lambda }^{d} \] Thus \[ {A}^{d}{v}_{k} = - {a}_{0}{v}_{k} - {a}_{1}A{v}_{k} - \cdots - {a}_{d - 1}{A}^{d - 1}{v}_{k} \] Recall the formalism for finding the matrix of \( A \) restricted to this invariant subspace. 
\[ \left( \begin{array}{lllll} A{v}_{k} & {A}^{2}{v}_{k} & {A}^{3}{v}_{k} & \cdots & - {a}_{0}{v}_{k} - {a}_{1}A{v}_{k} - \cdots - {a}_{d - 1}{A}^{d - 1}{v}_{k} \end{array}\right) = \]\n\n\[ \left( \begin{array}{lllll} {v}_{k} & A{v}_{k} & {A}^{2}{v}_{k} & \cdots & {A}^{d - 1}{v}_{k} \end{array}\right) \left( \begin{matrix} 0 & 0 & \cdots & 0 & - {a}_{0} \\ 1 & 0 & \cdots & 0 & - {a}_{1} \\ 0 & 1 & & \vdots & - {a}_{2} \\ \vdots & & \ddots & 0 & \vdots \\ 0 & 0 & \cdots & 1 & - {a}_{d - 1} \end{matrix}\right) \]\n\nThus the matrix of \( A \) restricted to \( \operatorname{span}\left( {\beta }_{{v}_{k}}\right) \) is the companion matrix \( C\left( {{\phi }_{k}{\left( \lambda \right) }^{{r}_{k}}}\right) \), which establishes (10.14). ∎
Theorem 10.8.4 Let \( V \) be a vector space having field of scalars \( \mathbb{F} \) and let \( A \in \mathcal{L}\left( {V, V}\right) \) . Then the rational canonical form of \( A \) is unique up to order of the blocks.
Proof: Let the minimal polynomial of \( A \) be \( \mathop{\prod }\limits_{{k = 1}}^{q}{\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) . Then recall from Corollary 10.2.4\n\n\[ V = {V}_{1} \oplus \cdots \oplus {V}_{q} \]\n\nwhere \( {V}_{k} = \ker \left( {{\phi }_{k}{\left( A\right) }^{{m}_{k}}}\right) \) . Also recall from Corollary 10.2.5 that the minimal polynomial of the restriction of \( A \) to \( {V}_{k} \) is \( {\phi }_{k}{\left( \lambda \right) }^{{m}_{k}} \) . Now apply Lemma 10.8.3 to \( A \) restricted to \( {V}_{k} \) . \( \blacksquare \)
Find a similarity transformation which will produce the rational canonical form for \( A \) .
The characteristic polynomial is \( {\lambda }^{3} - {24}{\lambda }^{2} + {180\lambda } - {432} \) . This factors as\n\n\[ {\left( \lambda - 6\right) }^{2}\left( {\lambda - {12}}\right) \]\n\nIt turns out this is also the minimal polynomial. You can see this by plugging in \( A \) where you see \( \lambda \) and observing things don’t work if you delete one of the \( \lambda - 6 \) factors. There is more on this in the exercises. It turns out you can compute the minimal polynomial pretty easily. Thus \( {\mathbb{Q}}^{3} \) is the direct sum of \( \ker \left( {\left( A - 6I\right) }^{2}\right) \) and \( \ker \left( {A - {12I}}\right) \) . Consider the first of these. You see easily that this is\n\n\[ y\left( \begin{array}{l} 1 \\ 1 \\ 0 \end{array}\right) + z\left( \begin{matrix} - 1 \\ 0 \\ 1 \end{matrix}\right), y, z \in \mathbb{Q} \]\n\nWhat about the length of \( A \) cyclic sets? It turns out it doesn’t matter much. You can start with either of these and get a cycle of length 2 . Let’s pick the second one. This leads to the cycle\n\n\[ \left( \begin{matrix} - 1 \\ 0 \\ 1 \end{matrix}\right) ,\left( \begin{matrix} - 4 \\ - 4 \\ 0 \end{matrix}\right) = A\left( \begin{matrix} - 1 \\ 0 \\ 1 \end{matrix}\right) ,\left( \begin{matrix} - {12} \\ - {48} \\ - {36} \end{matrix}\right) = {A}^{2}\left( \begin{matrix} - 1 \\ 0 \\ 1 \end{matrix}\right) \]\n\nwhere the last of the three is a linear combination of the first two. Take the first two as the first two columns of \( S \) . To get the third, you need a cycle of length 1 corresponding to \( \ker \left( {A - {12I}}\right) \) . This yields the eigenvector \( {\left( \begin{array}{lll} 1 & - 2 & 3 \end{array}\right) }^{T} \) . 
Thus\n\n\[ S = \left( \begin{matrix} - 1 & - 4 & 1 \\ 0 & - 4 & - 2 \\ 1 & 0 & 3 \end{matrix}\right) \]\n\nNow using Proposition 9.3.10, the rational canonical form for \( A \) should be\n\n\[ {\left( \begin{matrix} - 1 & - 4 & 1 \\ 0 & - 4 & - 2 \\ 1 & 0 & 3 \end{matrix}\right) }^{-1}\left( \begin{matrix} 5 & - 2 & 1 \\ 2 & {10} & - 2 \\ 9 & 0 & 9 \end{matrix}\right) \left( \begin{matrix} - 1 & - 4 & 1 \\ 0 & - 4 & - 2 \\ 1 & 0 & 3 \end{matrix}\right) = \left( \begin{matrix} 0 & - {36} & 0 \\ 1 & {12} & 0 \\ 0 & 0 & {12} \end{matrix}\right) \]
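The computation above can be verified numerically (assuming NumPy). Using `solve` avoids forming \( S^{-1} \) explicitly:

```python
import numpy as np

A = np.array([[5.0, -2.0,  1.0],
              [2.0, 10.0, -2.0],
              [9.0,  0.0,  9.0]])
S = np.array([[-1.0, -4.0,  1.0],
              [ 0.0, -4.0, -2.0],
              [ 1.0,  0.0,  3.0]])

R = np.linalg.solve(S, A @ S)   # S^(-1) A S without forming the inverse
expected = np.array([[0.0, -36.0,  0.0],
                     [1.0,  12.0,  0.0],
                     [0.0,   0.0, 12.0]])
# The 2 x 2 block is the companion matrix of (lambda - 6)^2 = lambda^2 - 12 lambda + 36.
assert np.allclose(R, expected)
```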
Lemma 11.1.2 The property of being a stochastic matrix is preserved by taking products.
Proof: Suppose the sum over each column equals 1 for \( A \) and \( B \) . Then letting the entries be denoted by \( \left( {a}_{ij}\right) \) and \( \left( {b}_{ij}\right) \) respectively,\n\n\[ \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{k}{a}_{ik}{b}_{kj} = \mathop{\sum }\limits_{k}\left( {\mathop{\sum }\limits_{i}{a}_{ik}}\right) {b}_{kj} = \mathop{\sum }\limits_{k}{b}_{kj} = 1. \]\n\nA similar argument yields the same result in the case where it is the sum over each row which equals 1 . Finally, if each \( {a}_{ij},{b}_{ij} \geq 0 \), then each entry of the product, being a sum of products of nonnegative numbers, is also nonnegative. ∎
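A quick numerical illustration of the lemma (assuming NumPy; `random_column_stochastic` is our own helper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_column_stochastic(n):
    """Random n x n matrix with nonnegative entries, each column summing to 1."""
    M = rng.random((n, n))
    return M / M.sum(axis=0)

A = random_column_stochastic(4)
B = random_column_stochastic(4)
P = A @ B
assert np.all(P >= 0)                    # nonnegative entries survive the product
assert np.allclose(P.sum(axis=0), 1.0)   # column sums are still 1
```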
Theorem 11.1.3 Let \( A \) be a real \( p \times p \) matrix having the properties\n\n1. \( {a}_{ij} \geq 0 \)\n\n2. Either \( \mathop{\sum }\limits_{{i = 1}}^{p}{a}_{ij} = 1 \) or \( \mathop{\sum }\limits_{{j = 1}}^{p}{a}_{ij} = 1 \) .\n\n3. The distinct eigenvalues of \( A \) are \( \left\{ {1,{\lambda }_{2},\ldots ,{\lambda }_{m}}\right\} \) where each \( \left| {\lambda }_{j}\right| < 1 \) .\n\nThen \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{A}^{n} = {A}_{\infty } \) exists in the sense that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{a}_{ij}^{n} = {a}_{ij}^{\infty } \), the \( i{j}^{\text{th }} \) entry \( {A}_{\infty } \) . Here \( {a}_{ij}^{n} \) denotes the \( i{j}^{\text{th }} \) entry of \( {A}^{n} \) . Also, if \( \lambda = 1 \) has algebraic multiplicity \( r \), then the Jordan block corresponding to \( \lambda = 1 \) is just the \( r \times r \) identity.
Proof. By the existence of the Jordan form for \( A \), it follows that there exists an invertible matrix \( P \) such that\n\n\[ \n{P}^{-1}{AP} = \left( \begin{array}{llll} I + N & & & \\ & {J}_{{r}_{2}}\left( {\lambda }_{2}\right) & & \\ & & \ddots & \\ & & & {J}_{{r}_{m}}\left( {\lambda }_{m}\right) \end{array}\right) = J \n\]\n\nwhere \( I \) is \( r \times r \) for \( r \) the multiplicity of the eigenvalue 1 and \( N \) is a nilpotent matrix for which \( {N}^{r} = 0 \) . I will show that because of Condition 2, \( N = 0 \) .\n\nFirst of all,\n\n\[ \n{J}_{{r}_{i}}\left( {\lambda }_{i}\right) = {\lambda }_{i}I + {N}_{i} \n\]\n\nwhere \( {N}_{i} \) satisfies \( {N}_{i}^{{r}_{i}} = 0 \) for some \( {r}_{i} > 0 \) . It is clear that \( {N}_{i}\left( {{\lambda }_{i}I}\right) = \left( {{\lambda }_{i}I}\right) {N}_{i} \) and so\n\n\[ \n{\left( {J}_{{r}_{i}}\left( {\lambda }_{i}\right) \right) }^{n} = \mathop{\sum }\limits_{{k = 0}}^{n}\left( \begin{array}{l} n \\ k \end{array}\right) {N}_{i}^{k}{\lambda }_{i}^{n - k} = \mathop{\sum }\limits_{{k = 0}}^{{r}_{i}}\left( \begin{array}{l} n \\ k \end{array}\right) {N}_{i}^{k}{\lambda }_{i}^{n - k} \n\]\n\nwhich converges to 0 due to the assumption that \( \left| {\lambda }_{i}\right| < 1 \) . There are finitely many terms and a typical one is a matrix whose entries are no larger than an expression of the form\n\n\[ \n{\left| {\lambda }_{i}\right| }^{n - k}{C}_{k}n\left( {n - 1}\right) \cdots \left( {n - k + 1}\right) \leq {C}_{k}{\left| {\lambda }_{i}\right| }^{n - k}{n}^{k} \n\]\n\nwhich converges to 0 because, by the root test, the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {\lambda }_{i}\right| }^{n - k}{n}^{k} \) converges. Thus for each \( i = 2,\ldots, m \) ,\n\n\[ \n\mathop{\lim }\limits_{{n \rightarrow \infty }}{\left( {J}_{{r}_{i}}\left( {\lambda }_{i}\right) \right) }^{n} = 0. 
\n\]\n\nBy Condition 2, if \( {a}_{ij}^{n} \) denotes the \( i{j}^{\text{th }} \) entry of \( {A}^{n} \), then either\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{p}{a}_{ij}^{n} = 1\text{ or }\mathop{\sum }\limits_{{j = 1}}^{p}{a}_{ij}^{n} = 1,{a}_{ij}^{n} \geq 0. \n\]\n\nThis follows from Lemma 11.1.2. It is obvious each \( {a}_{ij}^{n} \geq 0 \), and so the entries of \( {A}^{n} \) must be bounded independent of \( n \) .\n\nIt follows easily from\n\n\[ \n\overset{n\text{ times }}{\overbrace{{P}^{-1}{AP}{P}^{-1}{AP}{P}^{-1}{AP}\cdots {P}^{-1}{AP}}} = {P}^{-1}{A}^{n}P \n\]\n\nthat\n\n\[ \n{P}^{-1}{A}^{n}P = {J}^{n} \n\]\n\n(11.1)\n\nHence \( {J}^{n} \) must also have bounded entries as \( n \rightarrow \infty \) . However, this requirement is incompatible with an assumption that \( N \neq 0 \) .\n\nIf \( N \neq 0 \), then \( {N}^{s} \neq 0 \) but \( {N}^{s + 1} = 0 \) for some \( 1 \leq s \leq r \) . Then\n\n\[ \n{\left( I + N\right) }^{n} = I + \mathop{\sum }\limits_{{k = 1}}^{s}\left( \begin{array}{l} n \\ k \end{array}\right) {N}^{k} \n\]\n\nSince \( {N}^{s} \neq 0 \), some entry of \( {N}^{s} \) is nonzero, and the coefficient \( \left( \begin{array}{l} n \\ s \end{array}\right) \) grows faster than all the lower order binomial coefficients as \( n \rightarrow \infty \) . Hence the entries of \( {\left( I + N\right) }^{n} \), and so also of \( {J}^{n} \), would be unbounded, contrary to what was just shown. Therefore \( N = 0 \) and the Jordan block corresponding to \( \lambda = 1 \) is the \( r \times r \) identity. It follows that \( {J}^{n} \) converges as \( n \rightarrow \infty \), the identity block being constant and the other blocks converging to 0, and so by (11.1), \( {A}^{n} = P{J}^{n}{P}^{-1} \) also converges. ∎
Lemma 11.1.4 Suppose \( A = \left( {a}_{ij}\right) \) is a stochastic matrix. Then \( \lambda = 1 \) is an eigenvalue. If \( {a}_{ij} > 0 \) for all \( i, j \), then if \( \mu \) is an eigenvalue of \( A \), either \( \left| \mu \right| < 1 \) or \( \mu = 1 \) . In addition to this, if \( A\mathbf{v} = \mathbf{v} \) for a nonzero vector, \( \mathbf{v} \in {\mathbb{R}}^{n} \), then \( {v}_{j}{v}_{i} \geq 0 \) for all \( i, j \) so the components of \( \mathbf{v} \) have the same sign.
Proof: Suppose the matrix satisfies\n\n\[ \mathop{\sum }\limits_{j}{a}_{ij} = 1 \]\n\nThen if \( \mathbf{v} = {\left( \begin{array}{lll} 1 & \cdots & 1 \end{array}\right) }^{T} \), it is obvious that \( A\mathbf{v} = \mathbf{v} \) . Therefore, this matrix has \( \lambda = 1 \) as an eigenvalue. Suppose then that \( \mu \) is an eigenvalue. Is \( \left| \mu \right| < 1 \) or \( \mu = 1 \) ? Let \( \mathbf{v} \) be an eigenvector and let \( \left| {v}_{i}\right| \) be the largest of the \( \left| {v}_{j}\right| \).\n\n\[ \mu {v}_{i} = \mathop{\sum }\limits_{j}{a}_{ij}{v}_{j} \]\n\nand now multiply both sides by \( \overline{\mu {v}_{i}} \) to obtain\n\n\[ {\left| \mu \right| }^{2}{\left| {v}_{i}\right| }^{2} = \mathop{\sum }\limits_{j}{a}_{ij}{v}_{j}\overline{{v}_{i}\mu } = \mathop{\sum }\limits_{j}{a}_{ij}\operatorname{Re}\left( {{v}_{j}\overline{{v}_{i}\mu }}\right) \]\n\n\[ \leq \mathop{\sum }\limits_{j}{a}_{ij}\left| \mu \right| {\left| {v}_{i}\right| }^{2} = \left| \mu \right| {\left| {v}_{i}\right| }^{2} \]\n\nTherefore, \( \left| \mu \right| \leq 1 \) . If \( \left| \mu \right| = 1 \), then equality must hold in the above, and so \( {v}_{j}\overline{{v}_{i}\mu } \) must be real and nonnegative for each \( j \) . In particular, this holds for \( j = i \), which shows \( \bar{\mu } \) and hence \( \mu \) are real and nonnegative. Thus, in this case, \( \mu = 1 \) . The only other case is where \( \left| \mu \right| < 1 \) . Moreover, when \( \mu = 1 \) and \( \mathbf{v} \in {\mathbb{R}}^{n} \), the above shows \( {v}_{j}{v}_{i} \geq 0 \) for all \( j \), so all the components of \( \mathbf{v} \) have the same sign.\n\nIf instead, \( \mathop{\sum }\limits_{i}{a}_{ij} = 1 \), consider \( {A}^{T} \) . Both \( A \) and \( {A}^{T} \) have the same characteristic polynomial and so their eigenvalues are exactly the same. ∎
Lemma 11.1.5 Let \( A \) be any Markov matrix and let \( \mathbf{v} \) be a vector having all its components non negative with \( \mathop{\sum }\limits_{i}{v}_{i} = c \) . Then if \( \mathbf{w} = A\mathbf{v} \), it follows that \( {w}_{i} \geq 0 \) for all \( i \) and \( \mathop{\sum }\limits_{i}{w}_{i} = c \) .
Proof: From the definition of \( \mathbf{w} \) ,\n\n\[ \n{w}_{i} \equiv \mathop{\sum }\limits_{j}{a}_{ij}{v}_{j} \geq 0 \n\]\n\nAlso\n\n\[ \n\mathop{\sum }\limits_{i}{w}_{i} = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{a}_{ij}{v}_{j} = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{i}{a}_{ij}{v}_{j} = \mathop{\sum }\limits_{j}{v}_{j} = c. \n\]
Theorem 11.1.6 Suppose \( A \) is a Markov matrix (The sum over a column equals 1) in which \( {a}_{ij} > 0 \) for all \( i, j \) and suppose \( \mathbf{w} \) is a vector. Then for each \( i \) ,\n\n\[ \n\mathop{\lim }\limits_{{k \rightarrow \infty }}{\left( {A}^{k}\mathbf{w}\right) }_{i} = {v}_{i} \n\]\n\nwhere \( A\mathbf{v} = \mathbf{v} \) . In words, \( {A}^{k}\mathbf{w} \) always converges to a steady state. In addition to this, if the vector, \( \mathbf{w} \) satisfies \( {w}_{i} \geq 0 \) for all \( i \) and \( \mathop{\sum }\limits_{i}{w}_{i} = c \), then the vector \( \mathbf{v} \) will also satisfy the conditions, \( {v}_{i} \geq 0,\mathop{\sum }\limits_{i}{v}_{i} = c \) .
Proof: By Lemma 11.1.4, since each \( {a}_{ij} > 0 \), the eigenvalues are either 1 or have absolute value less than 1. Therefore, the claimed limit exists by Theorem 11.1.3. The assertion that the components are nonnegative and sum to \( c \) follows from Lemma 11.1.5. That \( A\mathbf{v} = \mathbf{v} \) follows from\n\n\[ \n\mathbf{v} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{A}^{n}\mathbf{w} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{A}^{n + 1}\mathbf{w} = A\mathop{\lim }\limits_{{n \rightarrow \infty }}{A}^{n}\mathbf{w} = A\mathbf{v}.\blacksquare \n\]
Corollary 11.1.7 Suppose \( A \) is a regular Markov matrix, one for which the entries of \( {A}^{k} \) are all positive for some \( k \), and suppose \( \mathbf{w} \) is a vector. Then for each \( i \) ,\n\n\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\left( {A}^{n}\mathbf{w}\right) }_{i} = {v}_{i} \]\n\nwhere \( A\mathbf{v} = \mathbf{v} \) . In words, \( {A}^{n}\mathbf{w} \) always converges to a steady state. In addition to this, if the vector \( \mathbf{w} \) satisfies \( {w}_{i} \geq 0 \) for all \( i \) and \( \mathop{\sum }\limits_{i}{w}_{i} = c \), then the vector \( \mathbf{v} \) will also satisfy the conditions \( {v}_{i} \geq 0,\mathop{\sum }\limits_{i}{v}_{i} = c \) .
Proof: Let the entries of \( {A}^{k} \) be all positive. Now suppose that \( {a}_{ij} \geq 0 \) for all \( i, j \) and \( A = \left( {a}_{ij}\right) \) is a transition matrix. Then if \( B = \left( {b}_{ij}\right) \) is a transition matrix with \( {b}_{ij} > 0 \) for all \( {ij} \), it follows that \( {BA} \) is a transition matrix which has strictly positive entries. The \( i{j}^{th} \) entry of \( {BA} \) is\n\n\[ \mathop{\sum }\limits_{k}{b}_{ik}{a}_{kj} > 0 \]\n\nThus, from Lemma 11.1.4, \( {A}^{k} \) has an eigenvalue equal to 1 for all \( k \) sufficiently large, and all the other eigenvalues have absolute value strictly less than 1 . The same must be true of \( A \), for if \( \lambda \) is an eigenvalue of \( A \) with \( \left| \lambda \right| = 1 \), then \( {\lambda }^{k} \) is an eigenvalue for \( {A}^{k} \) and so, for all \( k \) large enough, \( {\lambda }^{k} = 1 \) which is absurd unless \( \lambda = 1 \) . By Theorem 11.1.3, \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{A}^{n}\mathbf{w} \) exists. The rest follows as in Theorem 11.1.6. ∎
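The corollary can be illustrated with a matrix that has a zero entry but whose square is strictly positive (assuming NumPy; the particular matrix and starting vector here are our own choices):

```python
import numpy as np

# Column stochastic with a zero entry; A itself is not positive, but A^2 is,
# so A is regular.
A = np.array([[0.0, 0.5, 0.3],
              [0.5, 0.2, 0.3],
              [0.5, 0.3, 0.4]])
assert np.all(np.linalg.matrix_power(A, 2) > 0)

w = np.array([0.2, 0.3, 0.5])            # nonnegative, sums to c = 1
v = np.linalg.matrix_power(A, 100) @ w   # A^n w for large n
assert np.allclose(A @ v, v)             # steady state: A v = v
assert np.isclose(v.sum(), 1.0) and np.all(v >= 0)
```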
Proposition 11.3.3 Let \( {p}_{ij}^{n} \) denote the probability that \( {X}_{n} \) is in state \( j \) given that \( {X}_{0} \) was in state \( i \) . Then \( {p}_{ij}^{n} \) is the \( i{j}^{\text{th }} \) entry of the matrix \( {P}^{n} \) where \( P = \left( {p}_{ij}\right) \) .
Proof: This is clearly true if \( n = 1 \) and follows from the definition of the \( {p}_{ij} \) . Suppose true for \( n \) . Then the probability that \( {X}_{n + 1} \) is at \( j \) given that \( {X}_{0} \) was at \( i \) equals \( \mathop{\sum }\limits_{k}{p}_{ik}^{n}{p}_{kj} \) because \( {X}_{n} \) must have some value, \( k \), and so this represents all possible ways to go from \( i \) to \( j \) . You can go from \( i \) to 1 in \( n \) steps with probability \( {p}_{i1}^{n} \) and then from 1 to \( j \) in one step with probability \( {p}_{1j} \) and so the probability of this is \( {p}_{i1}^{n}{p}_{1j} \) but you can also go from \( i \) to 2 and then from 2 to \( j \) and from \( i \) to 3 and then from 3 to \( j \) etc. Thus the sum of these is just what is given and represents the probability of \( {X}_{n + 1} \) having the value \( j \) given \( {X}_{0} \) has the value \( i \) . \( \blacksquare \)
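The conditioning argument in the proof is exactly matrix multiplication, as a small check shows (assuming NumPy; the transition matrix is our own example):

```python
import numpy as np

# Row stochastic transition matrix: p_ij is the chance of moving from i to j.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Two step probabilities by conditioning on the intermediate state k:
# p_ij^2 = sum_k p_ik p_kj, which is exactly the ij entry of P @ P.
P2 = np.array([[sum(P[i, k] * P[k, j] for k in range(2)) for j in range(2)]
               for i in range(2)])
assert np.allclose(P2, np.linalg.matrix_power(P, 2))
```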
Theorem 11.3.4 The eigenvalues of \n\n\[ \n\left( \begin{matrix} 0 & p & 0 & \cdots & 0 \\ q & 0 & p & \cdots & 0 \\ 0 & q & 0 & \ddots & \vdots \\ \vdots & 0 & \ddots & \ddots & p \\ 0 & \vdots & 0 & q & 0 \end{matrix}\right) \n\]\n\nhave absolute value less than 1. Here \( p + q = 1 \) and both \( p, q > 0 \) .
Proof: By Gerschgorin’s theorem, if \( \lambda \) is an eigenvalue, then \( \left| \lambda \right| \leq 1 \) . Now suppose \( \mathbf{v} \) is an eigenvector for \( \lambda \) . Then\n\n\[ \nA\mathbf{v} = \left( \begin{matrix} p{v}_{2} \\ q{v}_{1} + p{v}_{3} \\ \vdots \\ q{v}_{n - 2} + p{v}_{n} \\ q{v}_{n - 1} \end{matrix}\right) = \lambda \left( \begin{matrix} {v}_{1} \\ {v}_{2} \\ \vdots \\ {v}_{n - 1} \\ {v}_{n} \end{matrix}\right) \n\]\n\nSuppose \( \left| \lambda \right| = 1 \) . Then the top row shows \( p\left| {v}_{2}\right| = \left| {v}_{1}\right| \) so \( \left| {v}_{1}\right| < \left| {v}_{2}\right| \) . Suppose \( \left| {v}_{1}\right| < \left| {v}_{2}\right| < \cdots < \left| {v}_{k}\right| \) for some \( k < n \) . Then\n\n\[ \n\left| {\lambda {v}_{k}}\right| = \left| {v}_{k}\right| \leq q\left| {v}_{k - 1}\right| + p\left| {v}_{k + 1}\right| < q\left| {v}_{k}\right| + p\left| {v}_{k + 1}\right| \n\]\n\nand so subtracting \( q\left| {v}_{k}\right| \) from both sides,\n\n\[ \np\left| {v}_{k}\right| < p\left| {v}_{k + 1}\right| \n\]\n\nshowing \( {\left\{ \left| {v}_{k}\right| \right\} }_{k = 1}^{n} \) is an increasing sequence. Now a contradiction results on the last line which requires \( \left| {v}_{n - 1}\right| > \left| {v}_{n}\right| \) . Therefore, \( \left| \lambda \right| < 1 \) for any eigenvalue of the above matrix. ∎
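The spectral radius bound can be checked numerically for a particular size and bias (assuming NumPy; `ruin_matrix` is our own helper):

```python
import numpy as np

def ruin_matrix(n, p):
    """The n x n tridiagonal matrix of Theorem 11.3.4: p on the superdiagonal,
    q = 1 - p on the subdiagonal, zeros elsewhere."""
    q = 1.0 - p
    return np.diag(np.full(n - 1, p), 1) + np.diag(np.full(n - 1, q), -1)

A = ruin_matrix(6, 0.55)
spectral_radius = np.abs(np.linalg.eigvals(A)).max()
assert spectral_radius < 1.0   # every eigenvalue is strictly inside the unit disc
```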
Corollary 11.3.5 Let \( p, q \) be positive numbers and let \( p + q = 1 \) . The eigenvalues of\n\n\[ \left( \begin{matrix} a & p & 0 & \cdots & 0 \\ q & a & p & \cdots & 0 \\ 0 & q & a & \ddots & \vdots \\ \vdots & 0 & \ddots & \ddots & p \\ 0 & \vdots & 0 & q & a \end{matrix}\right) \]\n\nare all strictly closer than 1 to a. That is, whenever \( \lambda \) is an eigenvalue,\n\n\[ \left| {\lambda - a}\right| < 1 \]
Proof: Let \( A \) be the above matrix and suppose \( A\mathbf{x} = \lambda \mathbf{x} \) . Then letting \( {A}^{\prime } \) denote\n\n\[ \left( \begin{matrix} 0 & p & 0 & \cdots & 0 \\ q & 0 & p & \cdots & 0 \\ 0 & q & 0 & \ddots & \vdots \\ \vdots & 0 & \ddots & \ddots & p \\ 0 & \vdots & 0 & q & 0 \end{matrix}\right) \]\n\nit follows\n\n\[ {A}^{\prime }\mathbf{x} = \left( {\lambda - a}\right) \mathbf{x} \]\n\nand so from the above theorem,\n\n\[ \left| {\lambda - a}\right| < 1\text{. ∎} \]
Example 12.1.4 Let \( V = {\mathbb{C}}^{n} \) with the inner product given by\n\n\[ \left( {\mathbf{x},\mathbf{y}}\right) \equiv \mathop{\sum }\limits_{{k = 1}}^{n}{x}_{k}{\bar{y}}_{k} \]
This is an example of a complex inner product space already discussed.
Example 12.1.5 Let \( V = {\mathbb{R}}^{n} \) , \[ \left( {\mathbf{x},\mathbf{y}}\right) = \mathbf{x} \cdot \mathbf{y} \equiv \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}{y}_{j} \]
This is an example of a real inner product space.
Example 12.1.6 Let \( V \) be any finite dimensional vector space and let \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) be a basis. Decree that \[ \left( {{v}_{i},{v}_{j}}\right) \equiv {\delta }_{ij} \equiv \left\{ \begin{array}{ll} 1 & \text{ if }i = j \\ 0 & \text{ if }i \neq j \end{array}\right. \] and define the inner product by \[ \left( {x, y}\right) \equiv \mathop{\sum }\limits_{{i = 1}}^{n}{x}^{i}\overline{{y}^{i}} \] where \[ x = \mathop{\sum }\limits_{{i = 1}}^{n}{x}^{i}{v}_{i}, y = \mathop{\sum }\limits_{{i = 1}}^{n}{y}^{i}{v}_{i} \]
The above is well defined because \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) is a basis. Thus the components \( {x}^{i} \) associated with any given \( x \in V \) are uniquely determined.
Theorem 12.1.7 (Cauchy Schwarz) In any inner product space\n\n\[ \n\left| \left( {x, y}\right) \right| \leq \left| x\right| \left| y\right| \n\]\n\nwhere \( \left| x\right| \equiv {\left( x, x\right) }^{1/2} \) .
Proof: Let \( \omega \in \mathbb{C},\left| \omega \right| = 1 \), and \( \bar{\omega }\left( {x, y}\right) = \left| \left( {x, y}\right) \right| = \operatorname{Re}\left( {x,{y\omega }}\right) \) . Let\n\n\[ \nF\left( t\right) = \left( {x + {ty\omega }, x + {t\omega y}}\right) .\n\]\n\nThen from the axioms of the inner product,\n\n\[ \nF\left( t\right) = {\left| x\right| }^{2} + {2t}\operatorname{Re}\left( {x,{\omega y}}\right) + {t}^{2}{\left| y\right| }^{2} \geq 0.\n\]\n\nThis yields\n\n\[ \n{\left| x\right| }^{2} + {2t}\left| \left( {x, y}\right) \right| + {t}^{2}{\left| y\right| }^{2} \geq 0.\n\]\n\nIf \( \left| y\right| = 0 \), then the inequality requires that \( \left| \left( {x, y}\right) \right| = 0 \) since otherwise, you could pick large negative \( t \) and contradict the inequality. If \( \left| y\right| > 0 \), it follows from the quadratic formula that\n\n\[ \n4{\left| \left( x, y\right) \right| }^{2} - 4{\left| x\right| }^{2}{\left| y\right| }^{2} \leq 0\text{. ∎}\n\]
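A numerical spot check of the inequality in \( \mathbb{C}^n \) with the usual inner product (assuming NumPy; the random vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

inner = lambda u, v: np.sum(u * np.conj(v))     # (x, y) = sum_k x_k conj(y_k)
lhs = abs(inner(x, y))
rhs = np.sqrt(inner(x, x).real) * np.sqrt(inner(y, y).real)
assert lhs <= rhs + 1e-12                       # |(x, y)| <= |x| |y|
```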
Proposition 12.1.8 For an inner product space, \( \left| x\right| \equiv {\left( x, x\right) }^{1/2} \) does specify a norm.
Proof: All the axioms are obvious except the triangle inequality. To verify this,\n\n\[ \n{\left| x + y\right| }^{2} \equiv \left( {x + y, x + y}\right) \equiv {\left| x\right| }^{2} + {\left| y\right| }^{2} + 2\operatorname{Re}\left( {x, y}\right) \n\]\n\n\[ \n\leq {\left| x\right| }^{2} + {\left| y\right| }^{2} + 2\left| \left( {x, y}\right) \right| \n\]\n\n\[ \n\leq {\left| x\right| }^{2} + {\left| y\right| }^{2} + 2\left| x\right| \left| y\right| = {\left( \left| x\right| + \left| y\right| \right) }^{2}\text{. ∎} \n\]
Lemma 12.2.1 Let \( X \) be a finite dimensional inner product space of dimension \( n \) whose basis is \( \left\{ {{x}_{1},\cdots ,{x}_{n}}\right\} \) . Then there exists an orthonormal basis for \( X,\left\{ {{u}_{1},\cdots ,{u}_{n}}\right\} \) which has the property that for each \( k \leq n \), span \( \left( {{x}_{1},\cdots ,{x}_{k}}\right) = \operatorname{span}\left( {{u}_{1},\cdots ,{u}_{k}}\right) \) .
Proof: Let \( \left\{ {{x}_{1},\cdots ,{x}_{n}}\right\} \) be a basis for \( X \) . Let \( {u}_{1} \equiv {x}_{1}/\left| {x}_{1}\right| \) . Thus for \( k = 1 \), span \( \left( {u}_{1}\right) = \) \( \operatorname{span}\left( {x}_{1}\right) \) and \( \left\{ {u}_{1}\right\} \) is an orthonormal set. Now suppose for some \( k < n,{u}_{1},\cdots ,{u}_{k} \) have been chosen such that \( \left( {{u}_{j},{u}_{l}}\right) = {\delta }_{jl} \) and \( \operatorname{span}\left( {{x}_{1},\cdots ,{x}_{k}}\right) = \operatorname{span}\left( {{u}_{1},\cdots ,{u}_{k}}\right) \) . Then define\n\n\[ \n{u}_{k + 1} \equiv \frac{{x}_{k + 1} - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{x}_{k + 1},{u}_{j}}\right) {u}_{j}}{\left| {x}_{k + 1} - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {x}_{k + 1},{u}_{j}\right) {u}_{j}\right| }, \n\]\n\n(12.1)\n\nwhere the denominator is not equal to zero because the \( {x}_{j} \) form a basis and so\n\n\[ \n{x}_{k + 1} \notin \operatorname{span}\left( {{x}_{1},\cdots ,{x}_{k}}\right) = \operatorname{span}\left( {{u}_{1},\cdots ,{u}_{k}}\right) \n\]\n\nThus by induction,\n\n\[ \n{u}_{k + 1} \in \operatorname{span}\left( {{u}_{1},\cdots ,{u}_{k},{x}_{k + 1}}\right) = \operatorname{span}\left( {{x}_{1},\cdots ,{x}_{k},{x}_{k + 1}}\right) . \n\]\n\nAlso, \( {x}_{k + 1} \in \operatorname{span}\left( {{u}_{1},\cdots ,{u}_{k},{u}_{k + 1}}\right) \) which is seen easily by solving (12.1) for \( {x}_{k + 1} \) and it follows\n\n\[ \n\operatorname{span}\left( {{x}_{1},\cdots ,{x}_{k},{x}_{k + 1}}\right) = \operatorname{span}\left( {{u}_{1},\cdots ,{u}_{k},{u}_{k + 1}}\right) . 
\n\]\n\nIf \( l \leq k \), \n\n\[ \n\left( {{u}_{k + 1},{u}_{l}}\right) = C\left( {\left( {{x}_{k + 1},{u}_{l}}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{x}_{k + 1},{u}_{j}}\right) \left( {{u}_{j},{u}_{l}}\right) }\right) \n\]\n\n\[ \n= C\left( {\left( {{x}_{k + 1},{u}_{l}}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{x}_{k + 1},{u}_{j}}\right) {\delta }_{lj}}\right) \n\]\n\n\[ \n= C\left( {\left( {{x}_{k + 1},{u}_{l}}\right) - \left( {{x}_{k + 1},{u}_{l}}\right) }\right) = 0. \n\]\n\nThe vectors, \( {\left\{ {u}_{j}\right\} }_{j = 1}^{n} \), generated in this way are therefore an orthonormal basis because each vector has unit length. ∎ \n\nThe process by which these vectors were generated is called the Gram Schmidt process.
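The Gram Schmidt process translates directly into code. A sketch for \( \mathbb{R}^n \) with the dot product (assuming NumPy; `gram_schmidt` is our own function following (12.1)):

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X form a basis; return an orthonormal basis with the same
    nested spans, following (12.1) with the dot product on R^n."""
    U = np.zeros_like(X, dtype=float)
    for k in range(X.shape[1]):
        v = X[:, k].copy()
        for j in range(k):
            v -= (X[:, k] @ U[:, j]) * U[:, j]  # subtract projections on u_1, ..., u_k
        U[:, k] = v / np.linalg.norm(v)         # nonzero since the columns are independent
    return U

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(X)
assert np.allclose(U.T @ U, np.eye(3))          # orthonormal columns
```

Since each \( u_{k+1} \) is a combination of \( x_1,\ldots,x_{k+1} \), the change of basis is upper triangular, which is why the leading spans agree.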
Lemma 12.2.3 Suppose \( {\left\{ {u}_{j}\right\} }_{j = 1}^{n} \) is an orthonormal basis for an inner product space \( X \) . Then for all \( x \in X \) , \[ x = \mathop{\sum }\limits_{{j = 1}}^{n}\left( {x,{u}_{j}}\right) {u}_{j} \]
Proof: By assumption that this is an orthonormal basis, \[ \mathop{\sum }\limits_{{j = 1}}^{n}\left( {x,{u}_{j}}\right) \overset{{\delta }_{jl}}{\overbrace{\left( {u}_{j},{u}_{l}\right) }} = \left( {x,{u}_{l}}\right) . \] Letting \( y = \mathop{\sum }\limits_{{k = 1}}^{n}\left( {x,{u}_{k}}\right) {u}_{k} \), it follows \[ \left( {x - y,{u}_{j}}\right) = \left( {x,{u}_{j}}\right) - \mathop{\sum }\limits_{{k = 1}}^{n}\left( {x,{u}_{k}}\right) \left( {{u}_{k},{u}_{j}}\right) \] \[ = \left( {x,{u}_{j}}\right) - \left( {x,{u}_{j}}\right) = 0 \] for all \( j \) . Hence, for any choice of scalars \( {c}^{1},\cdots ,{c}^{n} \) , \[ \left( {x - y,\mathop{\sum }\limits_{{j = 1}}^{n}{c}^{j}{u}_{j}}\right) = 0 \] and so \( \left( {x - y, z}\right) = 0 \) for all \( z \in X \) . Thus this holds in particular for \( z = x - y \) . Therefore, \( x \) \( = y \) . \( \blacksquare \)
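The lemma says the Fourier coefficients \( (x, u_j) \) recover \( x \). A check in \( \mathbb{R}^2 \) with a rotated orthonormal basis (assuming NumPy; the basis and vector are our own example):

```python
import numpy as np

# An orthonormal basis of R^2: the standard basis rotated by 30 degrees.
t = np.pi / 6
u1 = np.array([np.cos(t), np.sin(t)])
u2 = np.array([-np.sin(t), np.cos(t)])

x = np.array([3.0, -2.0])
reconstruction = (x @ u1) * u1 + (x @ u2) * u2   # sum_j (x, u_j) u_j
assert np.allclose(reconstruction, x)
```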
Theorem 12.3.1 Let \( f \in \mathcal{L}\left( {X,\mathbb{F}}\right) \) where \( X \) is an inner product space of dimension \( n \) . Then there exists a unique \( z \in X \) such that for all \( x \in X \) ,\n\n\[ f\left( x\right) = \left( {x, z}\right) . \]
Proof: First I will verify uniqueness. Suppose \( {z}_{j} \) works for \( j = 1,2 \) . Then for all \( x \in X \) ,

\[ 0 = f\left( x\right) - f\left( x\right) = \left( {x,{z}_{1} - {z}_{2}}\right) \]

and so \( {z}_{1} = {z}_{2} \) .

It remains to verify existence. By Lemma 12.2.1, there exists an orthonormal basis, \( {\left\{ {u}_{j}\right\} }_{j = 1}^{n} \) . Define

\[ z \equiv \mathop{\sum }\limits_{{j = 1}}^{n}\overline{f\left( {u}_{j}\right) }{u}_{j} \]

Then using Lemma 12.2.3,

\[ \left( {x, z}\right) = \left( {x,\mathop{\sum }\limits_{{j = 1}}^{n}\overline{f\left( {u}_{j}\right) }{u}_{j}}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}f\left( {u}_{j}\right) \left( {x,{u}_{j}}\right) \]

\[ = f\left( {\mathop{\sum }\limits_{{j = 1}}^{n}\left( {x,{u}_{j}}\right) {u}_{j}}\right) = f\left( x\right) .\blacksquare \]
Corollary 12.3.2 Let \( A \in \mathcal{L}\left( {X, Y}\right) \) where \( X \) and \( Y \) are two inner product spaces of finite dimension. Then there exists a unique \( {A}^{ * } \in \mathcal{L}\left( {Y, X}\right) \) such that\n\n\[{\left( Ax, y\right) }_{Y} = {\left( x,{A}^{ * }y\right) }_{X}\]\n\nfor all \( x \in X \) and \( y \in Y \) . The following formula holds\n\n\[{\left( \alpha A + \beta B\right) }^{ * } = \bar{\alpha }{A}^{ * } + \bar{\beta }{B}^{ * }\]
Proof: Let \( {f}_{y} \in \mathcal{L}\left( {X,\mathbb{F}}\right) \) be defined as\n\n\[{f}_{y}\left( x\right) \equiv {\left( Ax, y\right) }_{Y}.\]\n\nThen by the Riesz representation theorem, there exists a unique element of \( X,{A}^{ * }\left( y\right) \) such that\n\n\[{\left( Ax, y\right) }_{Y} = {\left( x,{A}^{ * }\left( y\right) \right) }_{X}.\]\n\nIt only remains to verify that \( {A}^{ * } \) is linear. Let \( a \) and \( b \) be scalars. Then for all \( x \in X \),\n\n\[{\left( x,{A}^{ * }\left( a{y}_{1} + b{y}_{2}\right) \right) }_{X} \equiv {\left( Ax,\left( a{y}_{1} + b{y}_{2}\right) \right) }_{Y}\]\n\n\[\equiv \bar{a}\left( {{Ax},{y}_{1}}\right) + \bar{b}\left( {{Ax},{y}_{2}}\right) \equiv\]\n\n\[\bar{a}\left( {x,{A}^{ * }\left( {y}_{1}\right) }\right) + \bar{b}\left( {x,{A}^{ * }\left( {y}_{2}\right) }\right) = \left( {x, a{A}^{ * }\left( {y}_{1}\right) + b{A}^{ * }\left( {y}_{2}\right) }\right) .\]\n\nSince this holds for every \( x \), it follows\n\n\[{A}^{ * }\left( {a{y}_{1} + b{y}_{2}}\right) = a{A}^{ * }\left( {y}_{1}\right) + b{A}^{ * }\left( {y}_{2}\right)\]\n\nwhich shows \( {A}^{ * } \) is linear as claimed.
Theorem 12.3.4 Let \( M \) be an \( m \times n \) matrix. Then \( {M}^{ * } = {\left( \bar{M}\right) }^{T} \) ; in words, the transpose of the conjugate of \( M \) is equal to the adjoint.
Proof: Using the definition of the inner product in \( {\mathbb{C}}^{n} \) ,\n\n\[ \left( {M\mathbf{x},\mathbf{y}}\right) = \left( {\mathbf{x},{M}^{ * }\mathbf{y}}\right) \equiv \mathop{\sum }\limits_{i}{x}_{i}\overline{\mathop{\sum }\limits_{j}{\left( {M}^{ * }\right) }_{ij}{y}_{j}} = \mathop{\sum }\limits_{{i, j}}\overline{{\left( {M}^{ * }\right) }_{ij}}\overline{{y}_{j}}{x}_{i}. \]\n\nAlso\n\n\[ \left( {M\mathbf{x},\mathbf{y}}\right) = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{i}{M}_{ji}\overline{{y}_{j}}{x}_{i} \]\n\nSince \( \mathbf{x},\mathbf{y} \) are arbitrary vectors, it follows that \( {M}_{ji} = \overline{{\left( {M}^{ * }\right) }_{ij}} \) and so, taking conjugates of both sides,\n\n\[ {M}_{ij}^{ * } = \overline{{M}_{ji}} \]\n\nwhich gives the conclusion of the theorem.
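Theorem 12.3.4 can be confirmed numerically: the conjugate transpose satisfies the defining relation \( (M\mathbf{x},\mathbf{y}) = (\mathbf{x}, M^*\mathbf{y}) \). A quick check (variable names are ours), recalling that the text's inner product is \( (\mathbf{a},\mathbf{b}) = \sum_i a_i \overline{b_i} \):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
M_star = M.conj().T             # the adjoint, per Theorem 12.3.4

x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# (Mx, y) in C^3 versus (x, M^* y) in C^4;
# np.vdot conjugates its FIRST argument, so (a, b) = np.vdot(b, a)
lhs = np.vdot(y, M @ x)
rhs = np.vdot(M_star @ y, x)
print(np.isclose(lhs, rhs))
```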
Theorem 12.3.5 Suppose \( V \) is a subspace of \( {\mathbb{F}}^{n} \) having dimension \( p \leq n \) . Then there exists a \( Q \in \mathcal{L}\left( {{\mathbb{F}}^{n},{\mathbb{F}}^{n}}\right) \) such that

\[ {QV} \subseteq \operatorname{span}\left( {{\mathbf{e}}_{1},\cdots ,{\mathbf{e}}_{p}}\right) \]

and \( \left| {Q\mathbf{x}}\right| = \left| \mathbf{x}\right| \) for all \( \mathbf{x} \) . Also

\[ {Q}^{ * }Q = Q{Q}^{ * } = I \]
Proof: By Lemma 12.2.1 there exists an orthonormal basis for \( V,{\left\{ {\mathbf{v}}_{i}\right\} }_{i = 1}^{p} \) . By using the Gram Schmidt process this may be extended to an orthonormal basis of the whole space, \( {\mathbb{F}}^{n}, \n\n\[ \n\left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{p},{\mathbf{v}}_{p + 1},\cdots ,{\mathbf{v}}_{n}}\right\} .\n\]\n\nNow define \( Q \in \mathcal{L}\left( {{\mathbb{F}}^{n},{\mathbb{F}}^{n}}\right) \) by \( Q\left( {\mathbf{v}}_{i}\right) \equiv {\mathbf{e}}_{i} \) and extend linearly. If \( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{v}}_{i} \) is an arbitrary element of \( {\mathbb{F}}^{n} \), \n\n\[ \n{\left| Q\left( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{v}}_{i}\right) \right| }^{2} = {\left| \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{e}}_{i}\right| }^{2} = \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{2} = {\left| \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{v}}_{i}\right| }^{2}.\n\]\n\nIt remains to verify that \( {Q}^{ * }Q = Q{Q}^{ * } = I \) . To do so, let \( \mathbf{x},\mathbf{y} \in {\mathbb{F}}^{n} \) . 
Then \n\n\[ \n\left( {Q\left( {\mathbf{x} + \mathbf{y}}\right), Q\left( {\mathbf{x} + \mathbf{y}}\right) }\right) = \left( {\mathbf{x} + \mathbf{y},\mathbf{x} + \mathbf{y}}\right) .\n\]\n\nThus \n\n\[ \n{\left| Q\mathbf{x}\right| }^{2} + {\left| Q\mathbf{y}\right| }^{2} + 2\operatorname{Re}\left( {Q\mathbf{x}, Q\mathbf{y}}\right) = {\left| \mathbf{x}\right| }^{2} + {\left| \mathbf{y}\right| }^{2} + 2\operatorname{Re}\left( {\mathbf{x},\mathbf{y}}\right)\n\]\n\nand since \( Q \) preserves norms, it follows that for all \( \mathbf{x},\mathbf{y} \in {\mathbb{F}}^{n} \), \n\n\[ \n\operatorname{Re}\left( {Q\mathbf{x}, Q\mathbf{y}}\right) = \operatorname{Re}\left( {\mathbf{x},{Q}^{ * }Q\mathbf{y}}\right) = \operatorname{Re}\left( {\mathbf{x},\mathbf{y}}\right) .\n\]\n\nThus \n\n\[ \n\operatorname{Re}\left( {\mathbf{x},{Q}^{ * }Q\mathbf{y} - \mathbf{y}}\right) = 0\n\]\n\n(12.7)\n\nfor all \( \mathbf{x},\mathbf{y} \) . Let \( \omega \) be a complex number such that \( \left| \omega \right| = 1 \) and \n\n\[ \n\omega \left( {\mathbf{x},{Q}^{ * }Q\mathbf{y} - \mathbf{y}}\right) = \left| \left( {\mathbf{x},{Q}^{ * }Q\mathbf{y} - \mathbf{y}}\right) \right| .\n\]\n\nThen from (12.7), \n\n\[ \n0 = \operatorname{Re}\left( {\omega \mathbf{x},{Q}^{ * }Q\mathbf{y} - \mathbf{y}}\right) = \operatorname{Re}\omega \left( {\mathbf{x},{Q}^{ * }Q\mathbf{y} - \mathbf{y}}\right)\n\]\n\n\[ \n= \left| \left( {\mathbf{x},{Q}^{ * }Q\mathbf{y} - \mathbf{y}}\right) \right|\n\]\n\nand since \( \mathbf{x} \) is arbitrary, it follows that for all \( \mathbf{y} \), \n\n\[ \n{Q}^{ * }Q\mathbf{y} - \mathbf{y} = \mathbf{0}\n\]\n\nThus \n\n\[ \nI = {Q}^{ * }Q\n\]\n\nSimilarly \( Q{Q}^{ * } = I \) . \( \blacksquare \)
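The construction in the proof can be imitated numerically: extend an orthonormal basis of \( V \) to one of the whole space and let \( Q \) carry the extended basis to the standard basis. The sketch below (names are ours, and it verifies the two conclusions rather than reproducing the proof's construction verbatim) uses QR factorizations for both steps:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 2
# an orthonormal basis {v_1, v_2} of a 2-dimensional subspace V of R^5
V, _ = np.linalg.qr(rng.normal(size=(n, p)))
# extend to an orthonormal basis of R^5; the first p columns of `full`
# still span V, and Q = full^* sends that basis to e_1, ..., e_n
full, _ = np.linalg.qr(np.hstack([V, rng.normal(size=(n, n - p))]))
Q = full.conj().T

assert np.allclose(Q.conj().T @ Q, np.eye(n))   # Q^* Q = I
# QV lies in span(e_1, e_2): the last n - p rows of Q v are zero
print(np.allclose((Q @ V)[p:, :], 0))
```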
Lemma 12.4.2 Let \( X, Y, Z \) be inner product spaces. Then for \( \alpha \) a scalar,

\[ {\left( \alpha \left( y \otimes x\right) \right) }^{ * } = \bar{\alpha }x \otimes y \]

(12.8)

and

\[ \left( {z \otimes {y}_{1}}\right) \left( {{y}_{2} \otimes x}\right) = \left( {{y}_{2},{y}_{1}}\right) z \otimes x \]

(12.9)
Proof: Let \( u \in X \) and \( v \in Y \) . Then

\[ \left( {\alpha \left( {y \otimes x}\right) u, v}\right) = \left( {\alpha \left( {u, x}\right) y, v}\right) = \alpha \left( {u, x}\right) \left( {y, v}\right) \]

and

\[ \left( {u,\bar{\alpha }x \otimes y\left( v\right) }\right) = \left( {u,\bar{\alpha }\left( {v, y}\right) x}\right) = \alpha \left( {y, v}\right) \left( {u, x}\right) . \]

Therefore, this verifies (12.8).

To verify (12.9), let \( u \in X \) .

\[ \left( {z \otimes {y}_{1}}\right) \left( {{y}_{2} \otimes x}\right) \left( u\right) = \left( {u, x}\right) \left( {z \otimes {y}_{1}}\right) \left( {y}_{2}\right) = \left( {u, x}\right) \left( {{y}_{2},{y}_{1}}\right) z \]

and

\[ \left( {{y}_{2},{y}_{1}}\right) z \otimes x\left( u\right) = \left( {{y}_{2},{y}_{1}}\right) \left( {u, x}\right) z. \]

Since the two linear transformations on both sides of (12.9) give the same answer for every \( u \in X \), it follows the two transformations are the same. \( \blacksquare \)
Theorem 12.4.4 Let \( X \) and \( Y \) be finite dimensional inner product spaces. Then \( \mathcal{L}\left( {X, Y}\right) \) is a vector space with the above definition of what it means to multiply by a scalar and add. Let \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) be an orthonormal basis for \( X \) and \( \left\{ {{w}_{1},\cdots ,{w}_{m}}\right\} \) be an orthonormal basis for \( Y \) . Then a basis for \( \mathcal{L}\left( {X, Y}\right) \) is \[ \left\{ {{w}_{j} \otimes {v}_{i} : i = 1,\cdots, n, j = 1,\cdots, m}\right\} . \]
Proof: It is obvious that \( \mathcal{L}\left( {X, Y}\right) \) is a vector space. It remains to verify the given set is a basis. Consider the following: \[ \left( {\left( {A - \mathop{\sum }\limits_{{k, l}}\left( {A{v}_{k},{w}_{l}}\right) {w}_{l} \otimes {v}_{k}}\right) {v}_{p},{w}_{r}}\right) = \left( {A{v}_{p},{w}_{r}}\right) - \] \[ \mathop{\sum }\limits_{{k, l}}\left( {A{v}_{k},{w}_{l}}\right) \left( {{v}_{p},{v}_{k}}\right) \left( {{w}_{l},{w}_{r}}\right) \] \[ = \left( {A{v}_{p},{w}_{r}}\right) - \mathop{\sum }\limits_{{k, l}}\left( {A{v}_{k},{w}_{l}}\right) {\delta }_{pk}{\delta }_{rl} \] \[ = \left( {A{v}_{p},{w}_{r}}\right) - \left( {A{v}_{p},{w}_{r}}\right) = 0. \] Letting \( A - \mathop{\sum }\limits_{{k, l}}\left( {A{v}_{k},{w}_{l}}\right) {w}_{l} \otimes {v}_{k} = B \), this shows that \( B{v}_{p} = 0 \) since \( {w}_{r} \) is an arbitrary element of the basis for \( Y \) . Since \( {v}_{p} \) is an arbitrary element of the basis for \( X \), it follows \( B = 0 \) as hoped. This has shown \( \left\{ {{w}_{j} \otimes {v}_{i} : i = 1,\cdots, n, j = 1,\cdots, m}\right\} \) spans \( \mathcal{L}\left( {X, Y}\right) \) . It only remains to verify the \( {w}_{j} \otimes {v}_{i} \) are linearly independent. Suppose then that \[ \mathop{\sum }\limits_{{i, j}}{c}_{ij}{w}_{j} \otimes {v}_{i} = 0 \] Then apply both sides to \( {v}_{s} \) . By definition this gives \[ 0 = \mathop{\sum }\limits_{{i, j}}{c}_{ij}{w}_{j}\left( {{v}_{s},{v}_{i}}\right) = \mathop{\sum }\limits_{{i, j}}{c}_{ij}{w}_{j}{\delta }_{si} = \mathop{\sum }\limits_{j}{c}_{sj}{w}_{j} \] Now the vectors \( \left\{ {{w}_{1},\cdots ,{w}_{m}}\right\} \) are independent because they form an orthonormal set and so the above requires \( {c}_{sj} = 0 \) for each \( j \) . Since \( s \) was arbitrary, this shows the linear transformations, \( \left\{ {{w}_{j} \otimes {v}_{i}}\right\} \) form a linearly independent set. \( \blacksquare \)
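In matrix terms, \( w \otimes v \) acts as \( u \mapsto (u,v)w \), i.e. it is the rank-one matrix \( w v^* \), and the expansion in the proof says \( A = \sum_{k,l} (Av_k, w_l)\, w_l \otimes v_k \). A numerical check of this expansion (names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 4
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
# orthonormal bases of X = C^4 and Y = C^3, as columns of unitary matrices
V, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
W, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))

# w_l (x) v_k is the rank-one matrix w_l v_k^*; the coefficient is
# (A v_k, w_l) = np.vdot(w_l, A v_k) since vdot conjugates its first argument
B = sum(np.vdot(W[:, l], A @ V[:, k]) * np.outer(W[:, l], V[:, k].conj())
        for k in range(n) for l in range(m))
print(np.allclose(B, A))
```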
Theorem 12.4.5 Let \( A = \mathop{\sum }\limits_{{i, j}}{c}_{ij}{w}_{i} \otimes {v}_{j} \in \mathcal{L}\left( {X, Y}\right) \) where as before, the vectors, \( \left\{ {w}_{i}\right\} \) are an orthonormal basis for \( Y \) and the vectors, \( \left\{ {v}_{j}\right\} \) are an orthonormal basis for \( X \) . Then if the matrix of \( A \) has entries \( {M}_{ij} \), it follows that \( {M}_{ij} = {c}_{ij} \) .
Proof: Recall\n\n\[ \nA{v}_{i} \equiv \mathop{\sum }\limits_{k}{M}_{ki}{w}_{k} \n\]\n\nAlso\n\n\[ \nA{v}_{i} = \mathop{\sum }\limits_{{k, j}}{c}_{kj}{w}_{k} \otimes {v}_{j}\left( {v}_{i}\right) = \mathop{\sum }\limits_{{k, j}}{c}_{kj}{w}_{k}\left( {{v}_{i},{v}_{j}}\right) \n\]\n\n\[ \n= \mathop{\sum }\limits_{{k, j}}{c}_{kj}{w}_{k}{\delta }_{ij} = \mathop{\sum }\limits_{k}{c}_{ki}{w}_{k} \n\]\n\nTherefore,\n\n\[ \n\mathop{\sum }\limits_{k}{M}_{ki}{w}_{k} = \mathop{\sum }\limits_{k}{c}_{ki}{w}_{k} \n\]\n\nand so \( {M}_{ki} = {c}_{ki} \) for all \( k \) . This happens for each \( i \) . \( \blacksquare \)
Lemma 12.5.1 Let \( V \) and \( W \) be finite dimensional inner product spaces and let \( A : V \rightarrow W \) be linear. For each \( y \in W \) there exists \( x \in V \) such that\n\n\[ \left| {{Ax} - y}\right| \leq \left| {A{x}_{1} - y}\right| \]\n\nfor all \( {x}_{1} \in V \) . Also, \( x \in V \) is a solution to this minimization problem if and only if \( x \) is a solution to the equation, \( {A}^{ * }{Ax} = {A}^{ * }y \) .
Proof: By Theorem 12.2.4 on Page 291 there exists a point, \( A{x}_{0} \), in the finite dimensional subspace, \( A\left( V\right) \), of \( W \) such that for all \( x \in V,{\left| Ax - y\right| }^{2} \geq {\left| A{x}_{0} - y\right| }^{2} \) . Also, from this theorem, this happens if and only if \( A{x}_{0} - y \) is perpendicular to every \( {Ax} \in A\left( V\right) \) . Therefore, the solution is characterized by \( \left( {A{x}_{0} - y,{Ax}}\right) = 0 \) for all \( x \in V \) which is the same as saying \( \left( {{A}^{ * }A{x}_{0} - {A}^{ * }y, x}\right) = 0 \) for all \( x \in V \) . In other words the solution is obtained by solving \( {A}^{ * }A{x}_{0} = {A}^{ * }y \) for \( {x}_{0} \) . ∎
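In coordinates, the lemma says the least squares solution of an overdetermined system \( A\mathbf{x} = \mathbf{y} \) is found by solving the normal equations \( A^*A\mathbf{x} = A^*\mathbf{y} \). A small sketch comparing this with NumPy's built-in least squares solver (matrix and names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 3))     # tall matrix: Ax = y is overdetermined
y = rng.normal(size=6)

# least squares solution from the normal equations A^* A x = A^* y
# (A^* = A^T here since A is real; A^T A is invertible when A has full rank)
x_normal = np.linalg.solve(A.T @ A, A.T @ y)
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x_normal, x_lstsq))
```

In floating point the normal equations square the condition number of \( A \), so library routines prefer QR or SVD based solvers, but the two answers agree for well-conditioned problems like this one.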
Theorem 12.6.2 Let \( A : V \rightarrow W \) where \( A \) is linear and \( V \) and \( W \) are inner product spaces. Then \( A\left( V\right) = \ker {\left( {A}^{ * }\right) }^{ \bot } \) .
Proof: Let \( y = {Ax} \) so \( y \in A\left( V\right) \) . Then if \( {A}^{ * }z = 0 \) ,

\[ \left( {y, z}\right) = \left( {{Ax}, z}\right) = \left( {x,{A}^{ * }z}\right) = 0 \]

showing that \( y \in \ker {\left( {A}^{ * }\right) }^{ \bot } \) . Thus \( A\left( V\right) \subseteq \ker {\left( {A}^{ * }\right) }^{ \bot } \) .

Now suppose \( y \in \ker {\left( {A}^{ * }\right) }^{ \bot } \) . Does there exist \( x \) such that \( {Ax} = y \) ? Since this might not be immediately clear, take the least squares solution to the problem. Thus let \( x \) be a solution to \( {A}^{ * }{Ax} = {A}^{ * }y \) . It follows \( {A}^{ * }\left( {y - {Ax}}\right) = 0 \) and so \( y - {Ax} \in \ker \left( {A}^{ * }\right) \) which implies from the assumption about \( y \) that \( \left( {y - {Ax}, y}\right) = 0 \) . Also, since \( {Ax} \) is the closest point to \( y \) in \( A\left( V\right) \), Theorem 12.2.4 on Page 291 implies that \( \left( {y - {Ax}, A{x}_{1}}\right) = 0 \) for all \( {x}_{1} \in V \) .

In particular this is true for \( {x}_{1} = x \) and so, since both inner products vanish,

\[ 0 = \left( {y - {Ax}, y}\right) - \left( {y - {Ax},{Ax}}\right) = {\left| y - Ax\right| }^{2}, \]

showing that \( y = {Ax} \) . Thus \( A\left( V\right) \supseteq \ker {\left( {A}^{ * }\right) }^{ \bot } \) . ∎
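The containment \( A(V) \subseteq \ker(A^*)^\perp \) is easy to see numerically: every vector \( A\mathbf{x} \) is orthogonal to every vector in the null space of \( A^* \). A sketch (names are ours; for a real matrix \( A^* = A^T \), and a null space basis can be read off the SVD):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 2))     # rank 2 with probability one
# a basis of ker(A^*) = ker(A^T): right singular vectors of A^T
# corresponding to zero singular values
_, s, Vt = np.linalg.svd(A.T)
null_basis = Vt[2:, :].T        # 4 x 2; A^T times each column is (near) zero

# every Ax is orthogonal to ker(A^*), illustrating A(V) = ker(A^*)^perp
x = rng.normal(size=2)
print(np.allclose(null_basis.T @ (A @ x), 0))
```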
Corollary 12.6.3 Let \( A, V \), and \( W \) be as described above. If the only solution to \( {A}^{ * }y = 0 \) is \( y = 0 \), then \( A \) is onto \( W \) .
Proof: If the only solution to \( {A}^{ * }y = 0 \) is \( y = 0 \), then \( \ker \left( {A}^{ * }\right) = \{ 0\} \) and so every vector from \( W \) is contained in \( \ker {\left( {A}^{ * }\right) }^{ \bot } \) and by the above theorem, this shows \( A\left( V\right) = W \) . ∎
Lemma 13.1.3 Let \( A \) be an \( n \times n \) matrix and let \( B \) be an \( m \times m \) matrix. Denote by \( C \) the matrix \( C \equiv \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right) \). Then \( C \) is diagonalizable if and only if both \( A \) and \( B \) are diagonalizable.
Proof: Suppose \( {S}_{A}^{-1}A{S}_{A} = {D}_{A} \) and \( {S}_{B}^{-1}B{S}_{B} = {D}_{B} \) where \( {D}_{A} \) and \( {D}_{B} \) are diagonal matrices. You should use block multiplication to verify that \( S \equiv \left( \begin{matrix} {S}_{A} & 0 \\ 0 & {S}_{B} \end{matrix}\right) \) is such that \( {S}^{-1}{CS} = {D}_{C} \), a diagonal matrix. Conversely, suppose \( C \) is diagonalized by \( S = \left( {{\mathbf{s}}_{1},\cdots ,{\mathbf{s}}_{n + m}}\right) \) . Thus \( S \) has columns \( {\mathbf{s}}_{i} \) . For each of these columns, write it in the form \( {\mathbf{s}}_{i} = \left( \begin{array}{l} {\mathbf{x}}_{i} \\ {\mathbf{y}}_{i} \end{array}\right) \) where \( {\mathbf{x}}_{i} \in {\mathbb{F}}^{n} \) and where \( {\mathbf{y}}_{i} \in {\mathbb{F}}^{m} \) . The result is \( S = \left( \begin{array}{ll} {S}_{11} & {S}_{12} \\ {S}_{21} & {S}_{22} \end{array}\right) \) where \( {S}_{11} \) is an \( n \times n \) matrix and \( {S}_{22} \) is an \( m \times m \) matrix. Then there is a diagonal matrix \( D = \operatorname{diag}\left( {{\lambda }_{1},\cdots ,{\lambda }_{n + m}}\right) = \left( \begin{matrix} {D}_{1} & 0 \\ 0 & {D}_{2} \end{matrix}\right) \) such that \( \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right) \left( \begin{array}{ll} {S}_{11} & {S}_{12} \\ {S}_{21} & {S}_{22} \end{array}\right) = \left( \begin{matrix} {S}_{11} & {S}_{12} \\ {S}_{21} & {S}_{22} \end{matrix}\right) \left( \begin{matrix} {D}_{1} & 0 \\ 0 & {D}_{2} \end{matrix}\right) \). Hence by block multiplication \( A{S}_{11} = {S}_{11}{D}_{1}, B{S}_{22} = {S}_{22}{D}_{2} \). It follows each of the \( {\mathbf{x}}_{i} \) is an eigenvector of \( A \) or else is the zero vector and that each of the \( {\mathbf{y}}_{i} \) is an eigenvector of \( B \) or is the zero vector. If there are \( n \) linearly independent \( {\mathbf{x}}_{i} \), then \( A \) is diagonalizable by Theorem 9.3.12.
The row rank of the matrix \( \left( {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{n + m}}\right) \) must be \( n \) because if this is not so, the rank of \( S \) would be less than \( n + m \) which would mean \( {S}^{-1} \) does not exist. Therefore, since the column rank equals the row rank, this matrix has column rank equal to \( n \) and this means there are \( n \) linearly independent eigenvectors of \( A \) implying that \( A \) is diagonalizable. Similar reasoning applies to \( B \) .
Lemma 13.1.6 If \( \mathcal{F} \) is a set of \( n \times n \) matrices which is simultaneously diagonalizable, then \( \mathcal{F} \) is a commuting family of matrices.
Proof: Let \( A, B \in \mathcal{F} \) and let \( S \) be a matrix which has the property that \( {S}^{-1}{AS} \) is a diagonal matrix for all \( A \in \mathcal{F} \) . Then \( {S}^{-1}{AS} = {D}_{A} \) and \( {S}^{-1}{BS} = {D}_{B} \) where \( {D}_{A} \) and \( {D}_{B} \) are diagonal matrices. Since diagonal matrices commute,\n\n\[ \n{AB} = S{D}_{A}{S}^{-1}S{D}_{B}{S}^{-1} = S{D}_{A}{D}_{B}{S}^{-1} \n\]\n\n\[ \n= S{D}_{B}{D}_{A}{S}^{-1} = S{D}_{B}{S}^{-1}S{D}_{A}{S}^{-1} = {BA}\text{.} \n\]
Lemma 13.1.7 Let \( D \) be a diagonal matrix of the form\n\n\[ D \equiv \left( \begin{matrix} {\lambda }_{1}{I}_{{n}_{1}} & 0 & \cdots & 0 \\ 0 & {\lambda }_{2}{I}_{{n}_{2}} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & {\lambda }_{r}{I}_{{n}_{r}} \end{matrix}\right) \]\n\nwhere \( {I}_{{n}_{i}} \) denotes the \( {n}_{i} \times {n}_{i} \) identity matrix and \( {\lambda }_{i} \neq {\lambda }_{j} \) for \( i \neq j \) and suppose \( B \) is a matrix which commutes with \( D \) . Then \( B \) is a block diagonal matrix of the form\n\n\[ B = \left( \begin{matrix} {B}_{1} & 0 & \cdots & 0 \\ 0 & {B}_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & {B}_{r} \end{matrix}\right) \]\n\nwhere \( {B}_{i} \) is an \( {n}_{i} \times {n}_{i} \) matrix.
Proof: Let \( B = \left( {B}_{ij}\right) \) where \( {B}_{ii} = {B}_{i} \) a block matrix as above in (13.2).\n\n\[ \left( \begin{matrix} {B}_{11} & {B}_{12} & \cdots & {B}_{1r} \\ {B}_{21} & {B}_{22} & \ddots & {B}_{2r} \\ \vdots & \ddots & \ddots & \vdots \\ {B}_{r1} & {B}_{r2} & \cdots & {B}_{rr} \end{matrix}\right) \]\n\nThen by block multiplication, since \( B \) is given to commute with \( D \) ,\n\n\[ {\lambda }_{j}{B}_{ij} = {\lambda }_{i}{B}_{ij} \]\n\nTherefore, if \( i \neq j,{B}_{ij} = 0 \) . \( \blacksquare \)
Lemma 13.1.8 Let \( \mathcal{F} \) denote a commuting family of \( n \times n \) matrices such that each \( A \in \mathcal{F} \) is diagonalizable. Then \( \mathcal{F} \) is simultaneously diagonalizable.
Proof: First note that if every matrix in \( \mathcal{F} \) has only one eigenvalue, there is nothing to prove. This is because for \( A \) such a matrix,

\[ {S}^{-1}{AS} = {\lambda I} \]

and so

\[ A = {\lambda I} \]

Thus all the matrices in \( \mathcal{F} \) are diagonal matrices and you could pick any \( S \) to diagonalize them all. Therefore, without loss of generality, assume some matrix in \( \mathcal{F} \) has more than one eigenvalue.

The significant part of the lemma is proved by induction on \( n \) . If \( n = 1 \), there is nothing to prove because all the \( 1 \times 1 \) matrices are already diagonal matrices. Suppose then that the theorem is true for all \( k \leq n - 1 \) where \( n \geq 2 \) and let \( \mathcal{F} \) be a commuting family of diagonalizable \( n \times n \) matrices. Pick \( A \in \mathcal{F} \) which has more than one eigenvalue and let \( S \) be an invertible matrix such that \( {S}^{-1}{AS} = D \) where \( D \) is of the form given in (13.1). By permuting the columns of \( S \) there is no loss of generality in assuming \( D \) has this form. Now denote by \( \widetilde{\mathcal{F}} \) the collection of matrices, \( \left\{ {{S}^{-1}{CS} : C \in \mathcal{F}}\right\} \) . Note that the same single matrix \( S \) is used for every \( C \in \mathcal{F} \) .

It follows easily that \( \widetilde{\mathcal{F}} \) is also a commuting family of diagonalizable matrices. By Lemma 13.1.7 every \( B \in \widetilde{\mathcal{F}} \) is of the form given in (13.2) because each of these commutes with \( D \) described above as \( {S}^{-1}{AS} \) and so by block multiplication, the diagonal blocks \( {B}_{i} \) corresponding to different \( B \in \widetilde{\mathcal{F}} \) commute.

By Corollary 13.1.4 each of these blocks is diagonalizable. This is because \( B \) is known to be so.
Therefore, by induction, since all the blocks are no larger than \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) thanks to the assumption that \( A \) has more than one eigenvalue, there exist invertible \( {n}_{i} \times {n}_{i} \) matrices, \( {T}_{i} \) such that \( {T}_{i}^{-1}{B}_{i}{T}_{i} \) is a diagonal matrix whenever \( {B}_{i} \) is one of the matrices making up the block diagonal of any \( B \in \widetilde{\mathcal{F}} \) . It follows that for \( T \) defined by

\[ T \equiv \left( \begin{matrix} {T}_{1} & 0 & \cdots & 0 \\ 0 & {T}_{2} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & {T}_{r} \end{matrix}\right) \]

then \( {T}^{-1}{BT} = \) a diagonal matrix for every \( B \in \widetilde{\mathcal{F}} \) including \( D \) . Consider \( {ST} \) . It follows that for all \( C \in \mathcal{F} \) ,

\[ {T}^{-1}\overset{\text{something in }\widetilde{\mathcal{F}}}{\overbrace{{S}^{-1}{CS}}}T = {\left( ST\right) }^{-1}C\left( {ST}\right) = \text{ a diagonal matrix. } \blacksquare \]
Theorem 13.1.9 Let \( \mathcal{F} \) denote a family of matrices which are diagonalizable. Then \( \mathcal{F} \) is simultaneously diagonalizable if and only if \( \mathcal{F} \) is a commuting family.
Proof: If \( \mathcal{F} \) is a commuting family, it follows from Lemma 13.1.8 that it is simultaneously diagonalizable. If it is simultaneously diagonalizable, then it follows from Lemma 13.1.6 that it is a commuting family.
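Theorem 13.1.9 has a convenient numerical consequence: for commuting diagonalizable matrices, one eigenvector matrix works for all of them. In the sketch below (matrices and names are ours), \( A \) has distinct eigenvalues, so its eigenvector matrix necessarily diagonalizes any matrix commuting with it:

```python
import numpy as np

rng = np.random.default_rng(6)
# build a commuting family: two matrices sharing the eigenbasis S
S = rng.normal(size=(4, 4))
A = S @ np.diag([1.0, 2.0, 3.0, 4.0]) @ np.linalg.inv(S)
B = S @ np.diag([5.0, 5.0, 7.0, 9.0]) @ np.linalg.inv(S)
assert np.allclose(A @ B, B @ A)        # they commute

# the eigenvector matrix of A (distinct eigenvalues) diagonalizes B as well
_, P = np.linalg.eig(A)
D_B = np.linalg.inv(P) @ B @ P
print(np.allclose(D_B, np.diag(np.diag(D_B)), atol=1e-8))
```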
Theorem 13.2.4 Let \( L \in \mathcal{L}\left( {H, H}\right) \) where \( H \) is an n dimensional inner product space. If \( L \) is Hermitian, then all of its eigenvalues \( {\lambda }_{k} \) are real and there exists an orthonormal basis of eigenvectors \( \left\{ {\mathbf{w}}_{k}\right\} \) such that\n\n\[ L = \mathop{\sum }\limits_{k}{\lambda }_{k}{\mathbf{w}}_{k} \otimes {\mathbf{w}}_{k} \]
Proof: By Schur’s theorem, Theorem 13.2.2, there exist \( {l}_{ij} \in \mathbb{F} \) such that\n\n\[ L = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{j}{l}_{ij}{\mathbf{w}}_{i} \otimes {\mathbf{w}}_{j} \]\n\nThen by Lemma 12.4.2,\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{j}{l}_{ij}{\mathbf{w}}_{i} \otimes {\mathbf{w}}_{j}\; = \;L = {L}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{j}{\left( {l}_{ij}{\mathbf{w}}_{i} \otimes {\mathbf{w}}_{j}\right) }^{ * } \]\n\n\[ = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{j}\overline{{l}_{ij}}{\mathbf{w}}_{j} \otimes {\mathbf{w}}_{i} = \mathop{\sum }\limits_{{i = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{i}\overline{{l}_{ji}}{\mathbf{w}}_{i} \otimes {\mathbf{w}}_{j} \]\n\nBy independence, if \( i = j \) ,\n\n\[ {l}_{ii} = \overline{{l}_{ii}} \]\n\nand so these are all real. If \( i < j \), it follows from independence again that\n\n\[ {l}_{ij} = 0 \]\n\nbecause the coefficients corresponding to \( i < j \) are all 0 on the right side. Similarly if \( i > j \) , it follows \( {l}_{ij} = 0 \) . Letting \( {\lambda }_{k} = {l}_{kk} \), this shows\n\n\[ L = \mathop{\sum }\limits_{k}{\lambda }_{k}{\mathbf{w}}_{k} \otimes {\mathbf{w}}_{k} \]\n\nThat each of these \( {\mathbf{w}}_{k} \) is an eigenvector corresponding to \( {\lambda }_{k} \) is obvious from the definition of the tensor product.
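The spectral decomposition of Theorem 13.2.4 is exactly what `numpy.linalg.eigh` computes for a Hermitian matrix: real eigenvalues \( \lambda_k \) and orthonormal eigenvectors \( \mathbf{w}_k \), with \( L = \sum_k \lambda_k \mathbf{w}_k \otimes \mathbf{w}_k \) and \( \mathbf{w} \otimes \mathbf{w} \) the rank-one matrix \( \mathbf{w}\mathbf{w}^* \). A check (names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
C = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
L = (C + C.conj().T) / 2            # a Hermitian matrix

lam, W = np.linalg.eigh(L)          # real eigenvalues, orthonormal eigenvectors
# reconstruct L = sum_k lambda_k w_k (x) w_k, with w (x) w the matrix w w^*
L_rebuilt = sum(lam[k] * np.outer(W[:, k], W[:, k].conj()) for k in range(4))
print(np.allclose(L_rebuilt, L))
```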
Corollary 13.3.5 Let \( A \in \mathcal{L}\left( {X, X}\right) \) be self adjoint (Hermitian) where \( X \) is a finite dimensional Hilbert space. Then the largest eigenvalue of \( A \) is given by\n\n\[ \max \{ \left( {A\mathbf{x},\mathbf{x}}\right) : \left| \mathbf{x}\right| = 1\} \]\n\n(13.6)\n\nand the minimum eigenvalue of \( A \) is given by\n\n\[ \min \{ \left( {A\mathbf{x},\mathbf{x}}\right) : \left| \mathbf{x}\right| = 1\} \]\n\n(13.7)
Proof: The proof of this is just like the proof of Theorem 13.3.3. Simply replace inf with sup and obtain a decreasing list of eigenvalues. This establishes (13.6). The claim (13.7) follows from Theorem 13.3.3. \( \blacksquare \)
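Corollary 13.3.5 can be probed numerically: the quadratic form \( (A\mathbf{x},\mathbf{x}) \) over unit vectors never exceeds the largest eigenvalue and never falls below the smallest. The sketch below (names and the random sampling are ours) evaluates the form at many random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(8)
C = rng.normal(size=(5, 5))
A = (C + C.T) / 2                   # real symmetric (self adjoint)
lam = np.linalg.eigvalsh(A)         # eigenvalues sorted in ascending order

xs = rng.normal(size=(100, 5))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # 100 random unit vectors
quad = np.einsum('ij,jk,ik->i', xs, A, xs)        # (A x, x) for each sample
print(quad.max() <= lam[-1] + 1e-9, quad.min() >= lam[0] - 1e-9)
```

The extremes are attained at the corresponding eigenvectors, which is how the corollary characterizes the largest and smallest eigenvalues.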
Corollary 13.3.6 Let \( A \in \mathcal{L}\left( {X, X}\right) \) where \( A \) is self adjoint. Then \( A = \mathop{\sum }\limits_{i}{\lambda }_{i}{v}_{i} \otimes {v}_{i} \) where \( A{v}_{i} = {\lambda }_{i}{v}_{i} \) and \( {\left\{ {v}_{i}\right\} }_{i = 1}^{n} \) is an orthonormal basis.
Proof : If \( {v}_{k} \) is one of the orthonormal basis vectors, \( A{v}_{k} = {\lambda }_{k}{v}_{k} \) . Also,\n\n\[ \mathop{\sum }\limits_{i}{\lambda }_{i}{v}_{i} \otimes {v}_{i}\left( {v}_{k}\right) = \mathop{\sum }\limits_{i}{\lambda }_{i}{v}_{i}\left( {{v}_{k},{v}_{i}}\right) \]\n\n\[ = \mathop{\sum }\limits_{i}{\lambda }_{i}{\delta }_{ik}{v}_{i} = {\lambda }_{k}{v}_{k} \]\n\nSince the two linear transformations agree on a basis, it follows they must coincide. -
Theorem 13.3.7 Let \( A \in \mathcal{L}\left( {X, X}\right) \) be self adjoint where \( X \) is a finite dimensional Hilbert space. Then for \( {\lambda }_{1} \leq {\lambda }_{2} \leq \cdots \leq {\lambda }_{n} \) the eigenvalues of \( A \), there exist orthonormal vectors \( \left\{ {{u}_{1},\cdots ,{u}_{n}}\right\} \) for which\n\n\[ A{u}_{k} = {\lambda }_{k}{u}_{k} \]\n\nFurthermore,\n\n\[ {\lambda }_{k} \equiv \mathop{\max }\limits_{{{w}_{1},\cdots ,{w}_{k - 1}}}\left\{ {\min \left\{ {\left( {{Ax}, x}\right) : \left| x\right| = 1, x \in {\left\{ {w}_{1},\cdots ,{w}_{k - 1}\right\} }^{ \bot }}\right\} }\right\} \]\n\n(13.8)\n\nwhere if \( k = 1,{\left\{ {w}_{1},\cdots ,{w}_{k - 1}\right\} }^{ \bot } \equiv X \) .
Proof: From Theorem 13.3.3, there exist eigenvalues and eigenvectors with \( \left\{ {{u}_{1},\cdots ,{u}_{n}}\right\} \) orthonormal and \( {\lambda }_{i} \leq {\lambda }_{i + 1} \) . Therefore, by Corollary 13.3.6\n\n\[ A = \mathop{\sum }\limits_{{j = 1}}^{n}{\lambda }_{j}{u}_{j} \otimes {u}_{j} \]\n\nFix \( \left\{ {{w}_{1},\cdots ,{w}_{k - 1}}\right\} \).\n\n\[ \left( {{Ax}, x}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\lambda }_{j}\left( {x,{u}_{j}}\right) \left( {{u}_{j}, x}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\lambda }_{j}{\left| \left( x,{u}_{j}\right) \right| }^{2} \]\n\nThen let \( Y = {\left\{ {w}_{1},\cdots ,{w}_{k - 1}\right\} }^{ \bot }\)\n\n\[ \inf \{ \left( {{Ax}, x}\right) : \left| x\right| = 1, x \in Y\} = \inf \left\{ {\mathop{\sum }\limits_{{j = 1}}^{n}{\lambda }_{j}{\left| \left( x,{u}_{j}\right) \right| }^{2} : \left| x\right| = 1, x \in Y}\right\} \]\n\n\[ \leq \inf \left\{ {\mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}{\left| \left( x,{u}_{j}\right) \right| }^{2} : \left| x\right| = 1,\left( {x,{u}_{j}}\right) = 0\text{ for }j > k,\text{ and }x \in Y}\right\} .\n\n(13.9)\n\nThe reason this is so is that the infimum is taken over a smaller set. Therefore, the infimum gets larger. Now (13.9) is no larger than\n\n\[ \inf \left\{ {{\lambda }_{k}\mathop{\sum }\limits_{{j = 1}}^{k}{\left| \left( x,{u}_{j}\right) \right| }^{2} : \left| x\right| = 1,\left( {x,{u}_{j}}\right) = 0\text{ for }j > k,\text{ and }x \in Y}\right\} = {\lambda }_{k} \]\n\nbecause since \( \left\{ {{u}_{1},\cdots ,{u}_{n}}\right\} \) is an orthonormal basis, \( {\left| x\right| }^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| \left( x,{u}_{j}\right) \right| }^{2} \) . 
It follows since \( \left\{ {{w}_{1},\cdots ,{w}_{k - 1}}\right\} \) is arbitrary,\n\n\[ \mathop{\sup }\limits_{{{w}_{1},\cdots ,{w}_{k - 1}}}\left\{ {\inf \left\{ {\left( {{Ax}, x}\right) : \left| x\right| = 1, x \in {\left\{ {w}_{1},\cdots ,{w}_{k - 1}\right\} }^{ \bot }}\right\} }\right\} \leq {\lambda }_{k}.\n\n(13.10)\n\nHowever, for each \( {w}_{1},\cdots ,{w}_{k - 1} \), the infimum is achieved so you can replace the inf in the above with min. In addition to this, it follows from Corollary 13.3.4 that there exists a set, \( \left\{ {{w}_{1},\cdots ,{w}_{k - 1}}\right\} \) for which\n\n\[ \inf \left\{ {\left( {{Ax}, x}\right) : \left| x\right| = 1, x \in {\left\{ {w}_{1},\cdots ,{w}_{k - 1}\right\} }^{ \bot }}\right\} = {\lambda }_{k}. \]\n\nPick \( \left\{ {{w}_{1},\cdots ,{w}_{k - 1}}\right\} = \left\{ {{u}_{1},\cdots ,{u}_{k - 1}}\right\} \) . Therefore, the sup in (13.10) is achieved and equals \( {\lambda }_{k} \) and (13.8) follows. -
Corollary 13.3.8 Let \( A \in \mathcal{L}\left( {X, X}\right) \) be self adjoint where \( X \) is a finite dimensional Hilbert space. Then for \( {\lambda }_{1} \leq {\lambda }_{2} \leq \cdots \leq {\lambda }_{n} \) the eigenvalues of \( A \), there exist orthonormal vectors \( \left\{ {{u}_{1},\cdots ,{u}_{n}}\right\} \) for which\n\n\[ A{u}_{k} = {\lambda }_{k}{u}_{k} \]\n\nFurthermore,\n\n\[ {\lambda }_{k} \equiv \mathop{\max }\limits_{{{w}_{1},\cdots ,{w}_{k - 1}}}\left\{ {\min \left\{ {\frac{\left( Ax, x\right) }{{\left| x\right| }^{2}} : x \neq 0, x \in {\left\{ {w}_{1},\cdots ,{w}_{k - 1}\right\} }^{ \bot }}\right\} }\right\} \]
Corollary 13.3.9 Let \( A \in \mathcal{L}\left( {X, X}\right) \) be self adjoint where \( X \) is a finite dimensional Hilbert space. Then for \( {\lambda }_{1} \leq {\lambda }_{2} \leq \cdots \leq {\lambda }_{n} \) the eigenvalues of \( A \), there exist orthonormal vectors \( \left\{ {{u}_{1},\cdots ,{u}_{n}}\right\} \) for which\n\n\[ A{u}_{k} = {\lambda }_{k}{u}_{k} \]\n\nFurthermore,\n\n\[ {\lambda }_{k} \equiv \mathop{\min }\limits_{{{w}_{1},\cdots ,{w}_{n - k}}}\left\{ {\max \left\{ {\frac{\left( Ax, x\right) }{{\left| x\right| }^{2}} : x \neq 0, x \in {\left\{ {w}_{1},\cdots ,{w}_{n - k}\right\} }^{ \bot }}\right\} }\right\} \]
where if \( k = n,{\left\{ {w}_{1},\cdots ,{w}_{n - k}\right\} }^{ \bot } \equiv X \) .
Corollary 13.4.2 Let \( X \) be a finite dimensional Hilbert space and let \( \left\{ {{v}_{1},\cdots ,{v}_{n}}\right\} \) be an orthonormal basis for \( X \) . Also, let \( q \) be the coordinate map associated with this basis satisfying \( q\left( \mathbf{x}\right) \equiv \mathop{\sum }\limits_{i}{x}_{i}{v}_{i} \) . Then \( {\left( \mathbf{x},\mathbf{y}\right) }_{{\mathbb{F}}^{n}} = {\left( q\left( \mathbf{x}\right), q\left( \mathbf{y}\right) \right) }_{X} \) . Also, if \( A \in \mathcal{L}\left( {X, X}\right) \), and \( M\left( A\right) \) is the matrix of \( A \) with respect to this basis,
\[ {\left( Aq\left( \mathbf{x}\right), q\left( \mathbf{y}\right) \right) }_{X} = {\left( M\left( A\right) \mathbf{x},\mathbf{y}\right) }_{{\mathbb{F}}^{n}}. \]
Lemma 13.4.4 Let \( X \) be a finite dimensional Hilbert space. A self adjoint \( A \in \mathcal{L}\left( {X, X}\right) \) is positive definite if and only if all its eigenvalues are positive and negative definite if and only if all its eigenvalues are negative. It is positive semidefinite if all the eigenvalues are nonnegative and it is negative semidefinite if all the eigenvalues are nonpositive.
Proof: Suppose first that \( A \) is positive definite and let \( \lambda \) be an eigenvalue. Then for \( \mathbf{x} \) an eigenvector corresponding to \( \lambda ,\lambda \left( {\mathbf{x},\mathbf{x}}\right) = \left( {\lambda \mathbf{x},\mathbf{x}}\right) = \left( {A\mathbf{x},\mathbf{x}}\right) > 0 \) . Therefore, \( \lambda > 0 \) as claimed.\n\nNow suppose all the eigenvalues of \( A \) are positive. From Theorem 13.3.3 and Corollary 13.3.6, \( A = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{\mathbf{u}}_{i} \otimes {\mathbf{u}}_{i} \) where the \( {\lambda }_{i} \) are the positive eigenvalues and \( \left\{ {\mathbf{u}}_{i}\right\} \) is an orthonormal set of eigenvectors. Therefore, letting \( \mathbf{x} \neq \mathbf{0} \), \n\n\[ \left( {A\mathbf{x},\mathbf{x}}\right) = \left( {\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{\mathbf{u}}_{i} \otimes {\mathbf{u}}_{i}}\right) \mathbf{x},\mathbf{x}}\right) = \left( {\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{\mathbf{u}}_{i}\left( {\mathbf{x},{\mathbf{u}}_{i}}\right) ,\mathbf{x}}\right) \]\n\n\[ = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}\left( {\mathbf{x},{\mathbf{u}}_{i}}\right) \left( {{\mathbf{u}}_{i},\mathbf{x}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{\left| \left( {\mathbf{u}}_{i},\mathbf{x}\right) \right| }^{2} > 0 \]\n\nbecause, since \( \left\{ {\mathbf{u}}_{i}\right\} \) is an orthonormal basis, \( {\left| \mathbf{x}\right| }^{2} = \mathop{\sum }\limits_{{i = 1}}^{n}{\left| \left( {\mathbf{u}}_{i},\mathbf{x}\right) \right| }^{2} > 0 \), so not every \( \left( {{\mathbf{u}}_{i},\mathbf{x}}\right) \) vanishes.\n\nTo establish the claim about negative definite, it suffices to note that \( A \) is negative definite if and only if \( - A \) is positive definite and the eigenvalues of \( A \) are \( \left( {-1}\right) \) times the eigenvalues of \( - A \) . The claims about positive semidefinite and negative semidefinite are obtained similarly. ∎
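The equivalence in the lemma is easy to exercise numerically. A minimal sketch (NumPy; the construction \( {B}^{ * }B + I \) is my own illustrative way of producing a positive definite matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)        # self adjoint; (Ax, x) = |Bx|^2 + |x|^2 > 0 for x != 0

# All eigenvalues are positive, as the lemma predicts.
lam = np.linalg.eigvalsh(A)
assert np.all(lam > 0)

# Conversely, the quadratic form is positive on nonzero vectors.
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ A @ x > 0
```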
Theorem 13.4.6 Let \( X \) be a finite dimensional Hilbert space and let \( A \in \mathcal{L}\left( {X, X}\right) \) be self adjoint. Then \( A \) is positive definite if and only if \( \det \left( {M{\left( A\right) }_{k}}\right) > 0 \) for every \( k = 1,\cdots, n \) . Here \( M\left( A\right) \) denotes the matrix of \( A \) with respect to some fixed orthonormal basis of \( X \) .
Proof: This theorem is proved by induction on \( n \) . It is clearly true if \( n = 1 \) . Suppose then that it is true for \( n - 1 \) where \( n \geq 2 \) . Since \( \det \left( {M\left( A\right) }\right) > 0 \), it follows that all the eigenvalues are nonzero. Are they all positive? Suppose not. Then there is some even number of them which are negative, even because the product of all the eigenvalues is known to be positive, equaling \( \det \left( {M\left( A\right) }\right) \) . Pick two, \( {\lambda }_{1} \) and \( {\lambda }_{2} \) and let \( M\left( A\right) {\mathbf{u}}_{i} = {\lambda }_{i}{\mathbf{u}}_{i} \) where \( {\mathbf{u}}_{i} \neq \mathbf{0} \) for \( i = 1,2 \) and \( \left( {{\mathbf{u}}_{1},{\mathbf{u}}_{2}}\right) = 0 \) . Now if \( \mathbf{y} \equiv {\alpha }_{1}{\mathbf{u}}_{1} + {\alpha }_{2}{\mathbf{u}}_{2} \) is a nonzero element of \( \operatorname{span}\left( {{\mathbf{u}}_{1},{\mathbf{u}}_{2}}\right) \), then since these are eigenvalues and \( \left( {{\mathbf{u}}_{1},{\mathbf{u}}_{2}}\right) = 0 \), a short computation shows\n\n\[ \left( {M\left( A\right) \left( {{\alpha }_{1}{\mathbf{u}}_{1} + {\alpha }_{2}{\mathbf{u}}_{2}}\right) ,{\alpha }_{1}{\mathbf{u}}_{1} + {\alpha }_{2}{\mathbf{u}}_{2}}\right) \]\n\n\[ = {\left| {\alpha }_{1}\right| }^{2}{\lambda }_{1}{\left| {\mathbf{u}}_{1}\right| }^{2} + {\left| {\alpha }_{2}\right| }^{2}{\lambda }_{2}{\left| {\mathbf{u}}_{2}\right| }^{2} < 0. \]\n\nNow letting \( \mathbf{x} \in {\mathbb{C}}^{n - 1},\mathbf{x} \neq \mathbf{0} \), the induction hypothesis implies\n\n\[ \left( {{\mathbf{x}}^{ * },0}\right) M\left( A\right) \left( \begin{array}{l} \mathbf{x} \\ 0 \end{array}\right) = {\mathbf{x}}^{ * }M{\left( A\right) }_{n - 1}\mathbf{x} = \left( {M{\left( A\right) }_{n - 1}\mathbf{x},\mathbf{x}}\right) > 0. 
\]\n\nNow the dimension of \( \left\{ {\mathbf{z} \in {\mathbb{C}}^{n} : {z}_{n} = 0}\right\} \) is \( n - 1 \) and the dimension of \( \operatorname{span}\left( {{\mathbf{u}}_{1},{\mathbf{u}}_{2}}\right) = 2 \) and so there must be some nonzero \( \mathbf{x} \in {\mathbb{C}}^{n} \) which is in both of these subspaces of \( {\mathbb{C}}^{n} \) . However, the first computation would require that \( \left( {M\left( A\right) \mathbf{x},\mathbf{x}}\right) < 0 \) while the second would require that \( \left( {M\left( A\right) \mathbf{x},\mathbf{x}}\right) > 0 \) . This contradiction shows that all the eigenvalues must be positive. This proves the if part of the theorem. The only if part is left to the reader.
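This criterion (positivity of all leading principal minors) can be sketched numerically as follows. The example matrices are my own illustrative assumptions (NumPy):

```python
import numpy as np

def leading_minors(M):
    """det(M_k) for k = 1, ..., n, where M_k is the upper-left k x k block."""
    return [np.linalg.det(M[:k, :k]) for k in range(1, M.shape[0] + 1)]

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
pos_def = B.T @ B + np.eye(4)            # positive definite by construction
indef = np.diag([1.0, -1.0, 2.0, 3.0])   # self adjoint but not positive definite

# Positive definite <=> every det(M(A)_k) > 0.
assert all(d > 0 for d in leading_minors(pos_def))
assert not all(d > 0 for d in leading_minors(indef))
```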
Corollary 13.4.7 Let \( X \) be a finite dimensional Hilbert space and let \( A \in \mathcal{L}\left( {X, X}\right) \) be self adjoint. Then \( A \) is negative definite if and only if \( \det \left( {M{\left( A\right) }_{k}}\right) {\left( -1\right) }^{k} > 0 \) for every \( k = 1,\cdots, n \) . Here \( M\left( A\right) \) denotes the matrix of \( A \) with respect to some fixed orthonormal basis of \( X \) .
Proof: This is immediate from the above theorem by noting that, as in the proof of Lemma 13.4.4, \( A \) is negative definite if and only if \( - A \) is positive definite. Therefore, if \( \det \left( {-M{\left( A\right) }_{k}}\right) > 0 \) for all \( k = 1,\cdots, n \), it follows that \( A \) is negative definite. However, \( \det \left( {-M{\left( A\right) }_{k}}\right) = {\left( -1\right) }^{k}\det \left( {M{\left( A\right) }_{k}}\right) \) . ∎
Lemma 13.6.1 Suppose \( R \in \mathcal{L}\left( {X, Y}\right) \) where \( X, Y \) are Hilbert spaces and \( R \) preserves distances. Then \( {R}^{ * }R = I \) .
Proof: Since \( R \) preserves distances, \( \left| {R\mathbf{x}}\right| = \left| \mathbf{x}\right| \) for every \( \mathbf{x} \) . Therefore from the axioms of the inner product,\n\n\[ \n{\left| \mathbf{x}\right| }^{2} + {\left| \mathbf{y}\right| }^{2} + \left( {\mathbf{x},\mathbf{y}}\right) + \left( {\mathbf{y},\mathbf{x}}\right) = {\left| \mathbf{x} + \mathbf{y}\right| }^{2} = \left( {R\left( {\mathbf{x} + \mathbf{y}}\right), R\left( {\mathbf{x} + \mathbf{y}}\right) }\right) \n\]\n\n\[ \n= \left( {R\mathbf{x}, R\mathbf{x}}\right) + \left( {R\mathbf{y}, R\mathbf{y}}\right) + \left( {R\mathbf{x}, R\mathbf{y}}\right) + \left( {R\mathbf{y}, R\mathbf{x}}\right) \n\]\n\n\[ \n= {\left| \mathbf{x}\right| }^{2} + {\left| \mathbf{y}\right| }^{2} + \left( {{R}^{ * }R\mathbf{x},\mathbf{y}}\right) + \left( {\mathbf{y},{R}^{ * }R\mathbf{x}}\right) \n\]\n\nand so for all \( \mathbf{x},\mathbf{y} \) ,\n\n\[ \n\left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) + \left( {\mathbf{y},{R}^{ * }R\mathbf{x} - \mathbf{x}}\right) = 0 \n\]\n\nHence for all \( \mathbf{x},\mathbf{y} \) ,\n\n\[ \n\operatorname{Re}\left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) = 0 \n\]\n\nNow for \( \mathbf{x},\mathbf{y} \) given, choose \( \alpha \in \mathbb{C} \) such that\n\n\[ \n\alpha \left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) = \left| \left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) \right| \n\]\n\nThen\n\n\[ \n0 = \operatorname{Re}\left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\bar{\alpha }\mathbf{y}}\right) = \operatorname{Re}\alpha \left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) \n\]\n\n\[ \n= \left| \left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) \right| \n\]\n\nThus \( \left| \left( {{R}^{ * }R\mathbf{x} - \mathbf{x},\mathbf{y}}\right) \right| = 0 \) for all \( \mathbf{x},\mathbf{y} \) because the given \( \mathbf{x},\mathbf{y} \) were arbitrary. 
Let \( \mathbf{y} = {R}^{ * }R\mathbf{x} - \mathbf{x} \) to conclude that for all \( \mathbf{x} \) ,\n\n\[ \n{R}^{ * }R\mathbf{x} - \mathbf{x} = \mathbf{0} \n\]\n\nwhich says \( {R}^{ * }R = I \) since \( \mathbf{x} \) is arbitrary. ∎
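A concrete distance-preserving \( R \) is a matrix with orthonormal columns. The sketch below (NumPy; the QR construction is my own illustrative choice) checks both the hypothesis and the conclusion, and notes that \( R{R}^{ * } \) need not be the identity when the spaces differ:

```python
import numpy as np

rng = np.random.default_rng(3)
# R in L(R^3, R^5) with orthonormal columns, obtained via a reduced QR factorization.
R, _ = np.linalg.qr(rng.standard_normal((5, 3)))

# R preserves distances: |Rx| = |x| for every x.
for _ in range(50):
    x = rng.standard_normal(3)
    assert abs(np.linalg.norm(R @ x) - np.linalg.norm(x)) < 1e-12

assert np.allclose(R.T @ R, np.eye(3))   # R* R = I, as the lemma asserts
# By contrast, R R* is only the orthogonal projection onto the range of R.
```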
Corollary 13.6.3 Let \( F \in \mathcal{L}\left( {X, Y}\right) \) and suppose \( n \geq m \) where \( X \) is a Hilbert space of dimension \( n \) and \( Y \) is a Hilbert space of dimension \( m \) . Then there exists a Hermitian \( U \in \) \( \mathcal{L}\left( {X, X}\right) \), and an element of \( \mathcal{L}\left( {X, Y}\right), R \), such that \[ F = {UR}, R{R}^{ * } = I\text{.} \]
Proof: Recall that \( {L}^{* * } = L \) and \( {\left( ML\right) }^{ * } = {L}^{ * }{M}^{ * } \) . Now apply Theorem 13.6.2 to \( {F}^{ * } \in \mathcal{L}\left( {Y, X}\right) \) . Thus, \[ {F}^{ * } = {R}^{ * }U \] where \( {R}^{ * } \) and \( U \) satisfy the conditions of that theorem. Then \[ F = {UR} \] and \( R{R}^{ * } = {R}^{* * }{R}^{ * } = I \) . \( \blacksquare \)
Theorem 13.6.5 Let \( F \in \mathcal{L}\left( {X, X}\right) \) . Then \( F \) is normal if and only if in Corollary 13.6.4 \( {RU} = {UR} \) and \( {QW} = {WQ} \) .
Proof: I will prove the statement about \( {RU} = {UR} \) and leave the other part as an exercise. First suppose that \( {RU} = {UR} \) and show \( F \) is normal. To begin with,\n\n\[ U{R}^{ * } = {\left( RU\right) }^{ * } = {\left( UR\right) }^{ * } = {R}^{ * }U. \]\n\nTherefore,\n\n\[ {F}^{ * }F = U{R}^{ * }{RU} = {U}^{2} \]\n\n\[ F{F}^{ * } = {RUU}{R}^{ * } = {UR}{R}^{ * }U = {U}^{2} \]\n\nwhich shows \( F \) is normal.\n\nNow suppose \( F \) is normal. Is \( {RU} = {UR} \) ? Since \( F \) is normal,\n\n\[ F{F}^{ * } = {RUU}{R}^{ * } = R{U}^{2}{R}^{ * } \]\n\nand\n\n\[ {F}^{ * }F = U{R}^{ * }{RU} = {U}^{2}. \]\n\nTherefore, \( R{U}^{2}{R}^{ * } = {U}^{2} \), and both are nonnegative and self adjoint. Therefore, the square roots of both sides must be equal by the uniqueness part of the theorem on fractional powers. It follows that the square root of the first, \( {RU}{R}^{ * } \) must equal the square root of the second, \( U \) . Therefore, \( {RU}{R}^{ * } = U \) and so \( {RU} = {UR} \) . This proves the theorem in one case. The other case in which \( W \) and \( Q \) commute is left as an exercise.
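The commutation criterion can be tested on small matrices. A sketch (NumPy; the `polar` helper, the particular normal and non-normal examples, and the invertibility assumption are mine, for illustration only): it computes the right polar factorization \( F = {RU} \) with \( U = {\left( {F}^{ * }F\right) }^{1/2} \) and compares \( {RU} \) with \( {UR} \).

```python
import numpy as np

def polar(F):
    """Right polar decomposition F = R U with U = (F*F)^{1/2}; assumes F invertible."""
    lam, V = np.linalg.eigh(F.conj().T @ F)
    U = V @ np.diag(np.sqrt(lam)) @ V.conj().T   # the nonnegative self adjoint square root
    R = F @ np.linalg.inv(U)
    return R, U

normal = np.array([[1.0, -2.0], [2.0, 1.0]])      # F F* = F* F = 5 I, so F is normal
non_normal = np.array([[1.0, 1.0], [0.0, 1.0]])   # a shear, which is not normal

R1, U1 = polar(normal)
R2, U2 = polar(non_normal)
assert np.allclose(R1 @ U1, U1 @ R1)        # normal  => factors commute
assert not np.allclose(R2 @ U2, U2 @ R2)    # not normal => they need not commute
```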
Lemma 13.8.1 Let \( A \) be an \( m \times n \) matrix. Then \( {A}^{ * }A \) is self adjoint and all its eigenvalues are nonnegative.
Proof: It is obvious that \( {A}^{ * }A \) is self adjoint. Suppose \( {A}^{ * }A\mathbf{x} = \lambda \mathbf{x} \) where \( \mathbf{x} \neq \mathbf{0} \) . Then \( \lambda {\left| \mathbf{x}\right| }^{2} = \left( {\lambda \mathbf{x},\mathbf{x}}\right) = \left( {{A}^{ * }A\mathbf{x},\mathbf{x}}\right) = \left( {A\mathbf{x}, A\mathbf{x}}\right) \geq 0 \) and so \( \lambda \geq 0 \) . ∎
Theorem 13.8.3 Let \( A \) be an \( m \times n \) matrix. Then there exist unitary matrices, \( U \) and \( V \) of the appropriate size such that\n\n\[ \n{U}^{ * }{AV} = \left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right)\n\]\nwhere \( \sigma \) is of the form\n\n\[ \n\sigma = \left( \begin{matrix} {\sigma }_{1} & & 0 \\ & \ddots & \\ 0 & & {\sigma }_{k} \end{matrix}\right)\n\]\n\nfor the \( {\sigma }_{i} \) the singular values of \( A \), arranged in order of decreasing size.
Proof: By the above lemma and Theorem 13.3.3 there exists an orthonormal basis \( {\left\{ {\mathbf{v}}_{i}\right\} }_{i = 1}^{n} \) such that \( {A}^{ * }A{\mathbf{v}}_{i} = {\sigma }_{i}^{2}{\mathbf{v}}_{i} \) where \( {\sigma }_{i}^{2} > 0 \) for \( i = 1,\cdots, k \) (taking \( {\sigma }_{i} > 0 \) ) and \( {\sigma }_{i}^{2} = 0 \) if \( i > k \) . Thus for \( i > k, A{\mathbf{v}}_{i} = \mathbf{0} \) because\n\n\[ \n\left( {A{\mathbf{v}}_{i}, A{\mathbf{v}}_{i}}\right) = \left( {{A}^{ * }A{\mathbf{v}}_{i},{\mathbf{v}}_{i}}\right) = \left( {\mathbf{0},{\mathbf{v}}_{i}}\right) = 0.\n\]\n\nFor \( i = 1,\cdots, k \), define \( {\mathbf{u}}_{i} \in {\mathbb{F}}^{m} \) by\n\n\[ \n{\mathbf{u}}_{i} \equiv {\sigma }_{i}^{-1}A{\mathbf{v}}_{i}\n\]\n\nThus \( A{\mathbf{v}}_{i} = {\sigma }_{i}{\mathbf{u}}_{i} \) . Now\n\n\[ \n\left( {{\mathbf{u}}_{i},{\mathbf{u}}_{j}}\right) = \left( {{\sigma }_{i}^{-1}A{\mathbf{v}}_{i},{\sigma }_{j}^{-1}A{\mathbf{v}}_{j}}\right) = \left( {{\sigma }_{i}^{-1}{\mathbf{v}}_{i},{\sigma }_{j}^{-1}{A}^{ * }A{\mathbf{v}}_{j}}\right)\n\]\n\n\[ \n= \left( {{\sigma }_{i}^{-1}{\mathbf{v}}_{i},{\sigma }_{j}^{-1}{\sigma }_{j}^{2}{\mathbf{v}}_{j}}\right) = \frac{{\sigma }_{j}}{{\sigma }_{i}}\left( {{\mathbf{v}}_{i},{\mathbf{v}}_{j}}\right) = {\delta }_{ij}.\n\]\n\nThus \( {\left\{ {\mathbf{u}}_{i}\right\} }_{i = 1}^{k} \) is an orthonormal set of vectors in \( {\mathbb{F}}^{m} \) . 
Also,\n\n\[ \nA{A}^{ * }{\mathbf{u}}_{i} = A{A}^{ * }{\sigma }_{i}^{-1}A{\mathbf{v}}_{i} = {\sigma }_{i}^{-1}A{A}^{ * }A{\mathbf{v}}_{i} = {\sigma }_{i}^{-1}A{\sigma }_{i}^{2}{\mathbf{v}}_{i} = {\sigma }_{i}^{2}{\mathbf{u}}_{i}.\n\]\n\nNow extend \( {\left\{ {\mathbf{u}}_{i}\right\} }_{i = 1}^{k} \) to an orthonormal basis for all of \( {\mathbb{F}}^{m},{\left\{ {\mathbf{u}}_{i}\right\} }_{i = 1}^{m} \) and let\n\n\[ \nU \equiv \left( \begin{array}{lll} {\mathbf{u}}_{1} & \cdots & {\mathbf{u}}_{m} \end{array}\right)\n\]\n\nwhile\n\n\[ \nV \equiv \left( \begin{array}{lll} {\mathbf{v}}_{1} & \cdots & {\mathbf{v}}_{n} \end{array}\right) .\n\]\n\nThus \( U \) is the matrix which has the \( {\mathbf{u}}_{i} \) as columns and \( V \) is defined as the matrix which has\nthe \( {\mathbf{v}}_{i} \) as columns. Then\n\n\[ \n{U}^{ * }{AV} = \left( \begin{matrix} {\mathbf{u}}_{1}^{ * } \\ \vdots \\ {\mathbf{u}}_{k}^{ * } \\ \vdots \\ {\mathbf{u}}_{m}^{ * } \end{matrix}\right) A\left( \begin{array}{lll} {\mathbf{v}}_{1} & \cdots & {\mathbf{v}}_{n} \end{array}\right)\n\]\n\n\[ \n= \left( \begin{matrix} {\mathbf{u}}_{1}^{ * } \\ \vdots \\ {\mathbf{u}}_{k}^{ * } \\ \vdots \\ {\mathbf{u}}_{m}^{ * } \end{matrix}\right) \left( \begin{array}{llllll} {\sigma }_{1}{\mathbf{u}}_{1} & \cdots & {\sigma }_{k}{\mathbf{u}}_{k} & \mathbf{0} & \cdots & \mathbf{0} \end{array}\right) = \left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right)\n\]\n\nwhere \( \sigma \) is given in the statement of the theorem. -
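The factorization in the theorem is exactly what a numerical SVD routine produces. A sketch (NumPy; the \( 3 \times 5 \) example is an illustrative assumption) checking \( {U}^{ * }{AV} \) is the padded diagonal matrix and that the \( {\sigma }_{i}^{2} \) are the eigenvalues of \( {A}^{ * }A \):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))          # an m x n matrix with m = 3, n = 5

# numpy returns A = U S Vh with singular values in decreasing order,
# matching the theorem's arrangement of the sigma_i.
U, s, Vh = np.linalg.svd(A)
Sigma = np.zeros((3, 5))
np.fill_diagonal(Sigma, s)

# U* A V equals the diagonal block of singular values padded with zeros.
assert np.allclose(U.conj().T @ A @ Vh.conj().T, Sigma)

# The sigma_i^2 are the nonzero eigenvalues of A* A.
assert np.allclose(np.sort(np.linalg.eigvalsh(A.T @ A))[-3:], np.sort(s**2))
```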
Corollary 13.8.4 Let \( A \) be an \( m \times n \) matrix. Then the rank of \( A \) and \( {A}^{ * } \) equals the number of singular values.
Proof: Since \( V \) and \( U \) are unitary, they are each one to one and onto and so it follows\n\nthat\n\[\n\operatorname{rank}\left( A\right) = \operatorname{rank}\left( {{U}^{ * }{AV}}\right) = \operatorname{rank}\left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) = \text{ number of singular values. }\n\]\n\nAlso since \( U, V \) are unitary,\n\n\[\n\operatorname{rank}\left( {A}^{ * }\right) = \operatorname{rank}\left( {{V}^{ * }{A}^{ * }U}\right) = \operatorname{rank}\left( {\left( {U}^{ * }AV\right) }^{ * }\right)\n\]\n\n\[\n= \operatorname{rank}\left( {\left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) }^{ * }\right) = \text{ number of singular values. }\blacksquare\n\]
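A quick numerical illustration of the corollary (NumPy; the rank-2 construction and the \( {10}^{-10} \) truncation threshold are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
# A rank-2 matrix built as a sum of two rank-one outer products.
A = np.outer(rng.standard_normal(4), rng.standard_normal(6)) \
  + np.outer(rng.standard_normal(4), rng.standard_normal(6))

s = np.linalg.svd(A, compute_uv=False)
num_singular_values = int(np.sum(s > 1e-10))   # count the (numerically) nonzero sigma_i

assert num_singular_values == 2
assert np.linalg.matrix_rank(A) == 2           # rank(A) = number of singular values
assert np.linalg.matrix_rank(A.conj().T) == 2  # and the same for A*
```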
Lemma 13.9.2 Let \( A \) be an \( m \times n \) complex matrix with singular value matrix\n\n\[ \Sigma = \left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) \]\n\nwith \( \sigma \) as defined above. Then\n\n\[ \parallel \Sigma {\parallel }_{F}^{2} = \parallel A{\parallel }_{F}^{2} \]
Proof: From the definition and letting \( U, V \) be unitary and of the right size,\n\n\[ \parallel {UA}{\parallel }_{F}^{2} \equiv \operatorname{trace}\left( {{UA}{A}^{ * }{U}^{ * }}\right) = \operatorname{trace}\left( {A{A}^{ * }}\right) = \parallel A{\parallel }_{F}^{2} \]\n\nAlso,\n\n\[ \parallel {AV}{\parallel }_{F}^{2} \equiv \operatorname{trace}\left( {{AV}{V}^{ * }{A}^{ * }}\right) = \operatorname{trace}\left( {A{A}^{ * }}\right) = \parallel A{\parallel }_{F}^{2}. \]\n\nIt follows\n\n\[ \parallel {UAV}{\parallel }_{F}^{2} = \parallel {AV}{\parallel }_{F}^{2} = \parallel A{\parallel }_{F}^{2}. \]\n\nNow consider (13.20). From what was just shown,\n\n\[ \parallel A{\parallel }_{F}^{2} = {\begin{Vmatrix}U\Sigma {V}^{ * }\end{Vmatrix}}_{F}^{2} = \parallel \Sigma {\parallel }_{F}^{2}.\blacksquare \]
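Equivalently, \( \parallel A{\parallel }_{F}^{2} \) is the sum of the squared singular values, which is easy to confirm numerically (NumPy sketch; the \( 4 \times 7 \) example is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 7))

s = np.linalg.svd(A, compute_uv=False)
fro = np.linalg.norm(A, 'fro')          # the Frobenius norm, (trace(A A*))^{1/2}

# ||A||_F^2 = ||Sigma||_F^2 = sum of the squared singular values.
assert abs(fro**2 - np.sum(s**2)) < 1e-10
```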
Proposition 13.11.2 \( {A}^{ + }\mathbf{y} \) is the solution to the problem of minimizing \( \left| {A\mathbf{x} - \mathbf{y}}\right| \) for all \( \mathbf{x} \) which has smallest norm. Thus\n\n\[ \left| {A{A}^{ + }\mathbf{y} - \mathbf{y}}\right| \leq \left| {A\mathbf{x} - \mathbf{y}}\right| \text{ for all }\mathbf{x} \]\n\nand if \( {\mathbf{x}}_{1} \) satisfies \( \left| {A{\mathbf{x}}_{1} - \mathbf{y}}\right| \leq \left| {A\mathbf{x} - \mathbf{y}}\right| \) for all \( \mathbf{x} \), then \( \left| {{A}^{ + }\mathbf{y}}\right| \leq \left| {\mathbf{x}}_{1}\right| \) .
Proof: Consider \( \mathbf{x} \) satisfying (13.22), equivalently \( {A}^{ * }A\mathbf{x} = {A}^{ * }\mathbf{y} \), that is,\n\n\[ \left( \begin{matrix} {\sigma }^{2} & 0 \\ 0 & 0 \end{matrix}\right) {V}^{ * }\mathbf{x} = \left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) {U}^{ * }\mathbf{y} \]\n\nand seek the one which has smallest norm. This is equivalent to making \( \left| {{V}^{ * }\mathbf{x}}\right| \) as small as possible because \( {V}^{ * } \) is unitary and so it preserves norms. For \( \mathbf{z} \) a vector, denote by \( {\left( \mathbf{z}\right) }_{k} \) the vector in \( {\mathbb{F}}^{k} \) which consists of the first \( k \) entries of \( \mathbf{z} \) . Then if \( \mathbf{x} \) is a solution to (13.22)\n\n\[ \left( \begin{matrix} {\sigma }^{2}{\left( {V}^{ * }\mathbf{x}\right) }_{k} \\ \mathbf{0} \end{matrix}\right) = \left( \begin{matrix} \sigma {\left( {U}^{ * }\mathbf{y}\right) }_{k} \\ \mathbf{0} \end{matrix}\right) \]\n\nand so \( {\left( {V}^{ * }\mathbf{x}\right) }_{k} = {\sigma }^{-1}{\left( {U}^{ * }\mathbf{y}\right) }_{k} \) . Thus the first \( k \) entries of \( {V}^{ * }\mathbf{x} \) are determined. In order to make \( \left| {{V}^{ * }\mathbf{x}}\right| \) as small as possible, the remaining \( n - k \) entries should equal zero. Therefore,\n\n\[ {V}^{ * }\mathbf{x} = \left( \begin{matrix} {\left( {V}^{ * }\mathbf{x}\right) }_{k} \\ 0 \end{matrix}\right) = \left( \begin{matrix} {\sigma }^{-1}{\left( {U}^{ * }\mathbf{y}\right) }_{k} \\ 0 \end{matrix}\right) = \left( \begin{matrix} {\sigma }^{-1} & 0 \\ 0 & 0 \end{matrix}\right) {U}^{ * }\mathbf{y} \]\n\nand so\n\n\[ \mathbf{x} = V\left( \begin{matrix} {\sigma }^{-1} & 0 \\ 0 & 0 \end{matrix}\right) {U}^{ * }\mathbf{y} \equiv {A}^{ + }\mathbf{y}\blacksquare \]
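Both properties of \( {A}^{ + }\mathbf{y} \) (least squares and minimal norm) can be exercised on a rank-deficient system, where the least squares solution is not unique. A sketch (NumPy; the rank-2 example and the perturbation along the null space are my own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
# A rank-deficient 6 x 3 system, so least squares solutions are not unique.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 3))
y = rng.standard_normal(6)

x_plus = np.linalg.pinv(A) @ y               # A^+ y

# Any least squares minimizer satisfies the normal equations A* A x = A* y.
assert np.allclose(A.T @ A @ x_plus, A.T @ y)

# Adding a null-space component leaves |Ax - y| (essentially) unchanged
# but strictly increases |x|, so A^+ y has the smallest norm.
_, _, Vh = np.linalg.svd(A)
null_vec = Vh[-1]                            # direction with A @ null_vec ~ 0
other = x_plus + 0.5 * null_vec
assert np.linalg.norm(A @ other - y) - np.linalg.norm(A @ x_plus - y) < 1e-9
assert np.linalg.norm(other) > np.linalg.norm(x_plus)
```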
Lemma 13.11.3 The matrix \( {A}^{ + } \) satisfies the following conditions.\n\n\[ A{A}^{ + }A = A,\;{A}^{ + }A{A}^{ + } = {A}^{ + },\;{A}^{ + }A\text{ and }A{A}^{ + }\text{ are Hermitian. } \]
Proof: This is routine. Recall\n\n\[ A = U\left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) {V}^{ * },\;{A}^{ + } = V\left( \begin{matrix} {\sigma }^{-1} & 0 \\ 0 & 0 \end{matrix}\right) {U}^{ * } \]\n\nso each condition follows on substituting these formulas and using \( {U}^{ * }U = I,{V}^{ * }V = I \) . For example,\n\n\[ A{A}^{ + }A = U\left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) \left( \begin{matrix} {\sigma }^{-1} & 0 \\ 0 & 0 \end{matrix}\right) \left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) {V}^{ * } = U\left( \begin{array}{ll} \sigma & 0 \\ 0 & 0 \end{array}\right) {V}^{ * } = A \]\n\nand the remaining conditions are verified the same way. ∎
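These are the Penrose conditions, and they are easy to check numerically for a concrete matrix (NumPy sketch; the \( 5 \times 3 \) example is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((5, 3))
A_plus = np.linalg.pinv(A)

assert np.allclose(A @ A_plus @ A, A)                  # A A^+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)        # A^+ A A^+ = A^+
assert np.allclose(A_plus @ A, (A_plus @ A).conj().T)  # A^+ A is Hermitian
assert np.allclose(A @ A_plus, (A @ A_plus).conj().T)  # A A^+ is Hermitian
```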
Corollary 14.0.7 If \( \left( {X,\parallel \cdot \parallel }\right) \) is a finite dimensional normed linear space with the field of scalars \( \mathbb{F} = \mathbb{C} \) or \( \mathbb{R} \), then \( X \) is complete.
Proof: Let \( \left\{ {\mathbf{x}}^{k}\right\} \) be a Cauchy sequence. Then letting the components of \( {\mathbf{x}}^{k} \) with respect to the given basis be\n\n\[ \n{x}_{1}^{k},\cdots ,{x}_{n}^{k} \n\]\n\nit follows from Theorem 14.0.6, that\n\n\[ \n\left( {{x}_{1}^{k},\cdots ,{x}_{n}^{k}}\right) \n\]\n\nis a Cauchy sequence in \( {\mathbb{F}}^{n} \) and so\n\n\[ \n\left( {{x}_{1}^{k},\cdots ,{x}_{n}^{k}}\right) \rightarrow \left( {{x}_{1},\cdots ,{x}_{n}}\right) \in {\mathbb{F}}^{n}. \n\]\n\nThus,\n\n\[ \n{\mathbf{x}}^{k} = \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}^{k}{\mathbf{v}}_{i} \rightarrow \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{v}}_{i} \in X.\blacksquare \n\]
Corollary 14.0.8 Suppose \( X \) is a finite dimensional linear space with the field of scalars either \( \mathbb{C} \) or \( \mathbb{R} \) and \( \parallel \cdot \parallel \) and \( \parallel \parallel \cdot \parallel \parallel \) are two norms on \( X \) . Then there exist positive constants, \( \delta \) and \( \Delta \), independent of \( \mathbf{x} \in X \) such that\n\n\[ \delta \parallel \parallel \mathbf{x}\parallel \parallel \leq \parallel \mathbf{x}\parallel \leq \Delta \parallel \parallel \mathbf{x}\parallel \parallel . \]\n\nThus any two norms are equivalent.
Proof: Let \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{n}}\right\} \) be a basis for \( X \) and let \( \left| \cdot \right| \) be the norm taken with respect to this basis which was described earlier. Then by Theorem 14.0.6, there are positive constants \( {\delta }_{1},{\Delta }_{1},{\delta }_{2},{\Delta }_{2} \), all independent of \( \mathbf{x} \in X \) such that\n\n\[ {\delta }_{2}\parallel \parallel \mathbf{x}\parallel \parallel \leq \left| \mathbf{x}\right| \leq {\Delta }_{2}\parallel \parallel \mathbf{x}\parallel \parallel \]\n\n\[ {\delta }_{1}\parallel \mathbf{x}\parallel \leq \left| \mathbf{x}\right| \leq {\Delta }_{1}\parallel \mathbf{x}\parallel \]\n\nThen\n\n\[ {\delta }_{2}\parallel \parallel \mathbf{x}\parallel \parallel \leq \left| \mathbf{x}\right| \leq {\Delta }_{1}\parallel \mathbf{x}\parallel \leq \frac{{\Delta }_{1}}{{\delta }_{1}}\left| \mathbf{x}\right| \leq \frac{{\Delta }_{1}{\Delta }_{2}}{{\delta }_{1}}\parallel \parallel \mathbf{x}\parallel \parallel \]\n\nand so\n\n\[ \frac{{\delta }_{2}}{{\Delta }_{1}}\parallel \parallel \mathbf{x}\parallel \parallel \leq \parallel \mathbf{x}\parallel \leq \frac{{\Delta }_{2}}{{\delta }_{1}}\parallel \parallel \mathbf{x}\parallel \parallel .\blacksquare \]
Theorem 14.0.10 Let \( X \) and \( Y \) be finite dimensional normed linear spaces of dimension \( n \) and \( m \) respectively and denote by \( \parallel \cdot \parallel \) the norm on either \( X \) or \( Y \) . Then if \( A \) is any linear function mapping \( X \) to \( Y \), then \( A \in \mathcal{L}\left( {X, Y}\right) \) and \( \left( {\mathcal{L}\left( {X, Y}\right) ,\parallel \cdot \parallel }\right) \) is a complete normed linear space of dimension \( \mathrm{{nm}} \) with\n\n\[ \parallel A\mathbf{x}\parallel \leq \parallel A\parallel \parallel \mathbf{x}\parallel \]
Proof: It is necessary to show the norm defined on linear transformations really is a norm. Again the first and third properties listed above for norms are obvious. It remains to show the second and verify \( \parallel A\parallel < \infty \) . Letting \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{n}}\right\} \) be a basis and \( \left| \cdot \right| \) defined with respect to this basis as above, there exist constants \( \delta ,\Delta > 0 \) such that\n\n\[ \delta \parallel \mathbf{x}\parallel \leq \left| \mathbf{x}\right| \leq \Delta \parallel \mathbf{x}\parallel \]\n\nThen,\n\n\[ \parallel A + B\parallel \equiv \sup \{ \parallel \left( {A + B}\right) \left( \mathbf{x}\right) \parallel : \parallel \mathbf{x}\parallel \leq 1\} \]\n\n\[ \leq \sup \{ \parallel A\mathbf{x}\parallel : \parallel \mathbf{x}\parallel \leq 1\} + \sup \{ \parallel B\mathbf{x}\parallel : \parallel \mathbf{x}\parallel \leq 1\} \]\n\n\[ \equiv \parallel A\parallel + \parallel B\parallel \text{.} \]\n\nNext consider the claim that \( \parallel A\parallel < \infty \) . This follows from\n\n\[ \parallel A\left( \mathbf{x}\right) \parallel = \begin{Vmatrix}{A\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{\mathbf{v}}_{i}}\right) }\end{Vmatrix} \leq \mathop{\sum }\limits_{{i = 1}}^{n}\left| {x}_{i}\right| \begin{Vmatrix}{A\left( {\mathbf{v}}_{i}\right) }\end{Vmatrix} \]\n\n\[ \leq \left| \mathbf{x}\right| {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}A\left( {\mathbf{v}}_{i}\right) \end{Vmatrix}}^{2}\right) }^{1/2} \leq \Delta \parallel \mathbf{x}\parallel {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}A\left( {\mathbf{v}}_{i}\right) \end{Vmatrix}}^{2}\right) }^{1/2} < \infty . \]\n\nThus \( \parallel A\parallel \leq \Delta {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\begin{Vmatrix}A\left( {\mathbf{v}}_{i}\right) \end{Vmatrix}}^{2}\right) }^{1/2} \) .
Proposition 14.0.13 The following holds.\n\n\[ \parallel A{\parallel }_{2} = \sup \{ \left| {A\mathbf{x}}\right| : \left| \mathbf{x}\right| = 1\} \equiv \parallel A\parallel . \]
Proof: Note that \( {A}^{ * }A \) is Hermitian and so by Corollary 13.3.5,\n\n\[ \parallel A{\parallel }_{2} = \max \left\{ {{\left( {A}^{ * }A\mathbf{x},\mathbf{x}\right) }^{1/2} : \left| \mathbf{x}\right| = 1}\right\} \]\n\n\[ = \max \left\{ {{\left( A\mathbf{x}, A\mathbf{x}\right) }^{1/2} : \left| \mathbf{x}\right| = 1}\right\} \]\n\n\[ = \max \{ \left| {A\mathbf{x}}\right| : \left| \mathbf{x}\right| = 1\} = \parallel A\parallel \text{. ∎} \]
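Numerically, this says the operator norm of \( A \) is its largest singular value, and no unit vector can do better. A sketch (NumPy; the random sampling of the unit sphere is an illustrative approximation of the supremum):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 4))

sigma_max = np.linalg.svd(A, compute_uv=False)[0]
# numpy's spectral norm agrees with the largest singular value.
assert abs(np.linalg.norm(A, 2) - sigma_max) < 1e-12
# And sigma_max^2 is the largest eigenvalue of A* A, as in the proposition.
assert abs(np.linalg.eigvalsh(A.T @ A)[-1] - sigma_max**2) < 1e-9

# Sample the unit sphere: |Ax| never exceeds the operator norm.
best = 0.0
for _ in range(2000):
    x = rng.standard_normal(4)
    x /= np.linalg.norm(x)
    best = max(best, np.linalg.norm(A @ x))
assert best <= sigma_max + 1e-12
```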
Theorem 14.0.15 Let \( {X}_{i} \) and \( \parallel \cdot {\parallel }_{i} \) be given in the above definition and consider the norms on \( \mathop{\prod }\limits_{{i = 1}}^{n}{X}_{i} \) described there in terms of norms on \( {\mathbb{R}}^{n} \) . Then any two of these norms on \( \mathop{\prod }\limits_{{i = 1}}^{n}{X}_{i} \) obtained in this way are equivalent.
For example, define\n\n\[ \parallel \mathbf{x}{\parallel }_{1} \equiv \mathop{\sum }\limits_{{i = 1}}^{n}\left| {x}_{i}\right| \]\n\n\[ \parallel \mathbf{x}{\parallel }_{\infty } \equiv \max \left\{ {\left| {x}_{i}\right|, i = 1,\cdots, n}\right\} \]\n\nor\n\n\[ \parallel \mathbf{x}{\parallel }_{2} = {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{2}\right) }^{1/2} \]\n\nand all three are equivalent norms on \( \mathop{\prod }\limits_{{i = 1}}^{n}{X}_{i} \).
Lemma 14.1.3 If \( a, b \geq 0 \) and \( {p}^{\prime } \) is defined by \( \frac{1}{p} + \frac{1}{{p}^{\prime }} = 1 \), then\n\n\[ {ab} \leq \frac{{a}^{p}}{p} + \frac{{b}^{{p}^{\prime }}}{{p}^{\prime }}. \]\n
Proof of the Proposition: If \( \mathbf{x} \) or \( \mathbf{y} \) equals the zero vector there is nothing to prove. Therefore, assume they are both nonzero. Let \( A = {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{p}\right) }^{1/p} \) and \( B = {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {y}_{i}\right| }^{{p}^{\prime }}\right) }^{1/{p}^{\prime }} \) . Then using Lemma 14.1.3,\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{n}\frac{\left| {x}_{i}\right| }{A}\frac{\left| {y}_{i}\right| }{B} \leq \mathop{\sum }\limits_{{i = 1}}^{n}\left\lbrack {\frac{1}{p}{\left( \frac{\left| {x}_{i}\right| }{A}\right) }^{p} + \frac{1}{{p}^{\prime }}{\left( \frac{\left| {y}_{i}\right| }{B}\right) }^{{p}^{\prime }}}\right\rbrack \]\n\n\[ = \frac{1}{p}\frac{1}{{A}^{p}}\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{p} + \frac{1}{{p}^{\prime }}\frac{1}{{B}^{{p}^{\prime }}}\mathop{\sum }\limits_{{i = 1}}^{n}{\left| {y}_{i}\right| }^{{p}^{\prime }} \]\n\n\[ = \frac{1}{p} + \frac{1}{{p}^{\prime }} = 1 \]\n\nand so\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{n}\left| {x}_{i}\right| \left| {y}_{i}\right| \leq {AB} = {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{p}\right) }^{1/p}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {y}_{i}\right| }^{{p}^{\prime }}\right) }^{1/{p}^{\prime }}.\blacksquare \]\n
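Both Young's inequality (the lemma) and the resulting Hölder inequality can be spot-checked numerically. A sketch (NumPy; the choice \( p = 3 \) and the random samples are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
p = 3.0
q = p / (p - 1.0)          # the conjugate exponent p', so 1/p + 1/p' = 1

# Young's inequality ab <= a^p / p + b^{p'} / p' from the lemma.
for _ in range(100):
    a, b = rng.random(2)
    assert a * b <= a**p / p + b**q / q + 1e-12

# Hoelder's inequality: sum |x_i y_i| <= ||x||_p ||y||_{p'}.
for _ in range(100):
    x = rng.standard_normal(6)
    y = rng.standard_normal(6)
    lhs = np.sum(np.abs(x * y))
    rhs = np.sum(np.abs(x)**p)**(1/p) * np.sum(np.abs(y)**q)**(1/q)
    assert lhs <= rhs + 1e-12
```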