T, and therefore

$$\mathbf{x} = \frac{1}{\det A}\begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.$$

Consider the first component $x_1$ of $\mathbf{x}$:

$$x_1 = \frac{1}{\det A}\,(b_1 C_{11} + b_2 C_{21} + \cdots + b_n C_{n1}).$$

The expression $b_1 C_{11} + b_2 C_{21} + \cdots + b_n C_{n1}$ is the cofactor expansion along the first column of the matrix obtained from $A$ by replacing the first column with $\mathbf{b}$:

$$\det \begin{bmatrix} b_1 & a_{12} & \cdots & a_{1n} \\ b_2 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_n & a_{n2} & \cdots & a_{nn} \end{bmatrix} = b_1 C_{11} + b_2 C_{21} + \cdots + b_n C_{n1}.$$

Similarly, $x_2 = \frac{1}{\det A}(b_1 C_{12} + b_2 C_{22} + \cdots + b_n C_{n2})$, and $b_1 C_{12} + b_2 C_{22} + \cdots + b_n C_{n2}$ is the cofactor expansion along the second column of the matrix obtained from $A$ by replacing the second column with $\mathbf{b}$. In summary:

Theorem 13.3: (Cramer's Rule) Let $A \in \mathbb{R}^{n\times n}$ be an invertible matrix. Let $\mathbf{b} \in \mathbb{R}^n$ and let $A_i$ be the matrix obtained from $A$ by replacing the $i$th column with $\mathbf{b}$. Then the solution to $A\mathbf{x} = \mathbf{b}$ is

$$\mathbf{x} = \frac{1}{\det A}\begin{bmatrix} \det A_1 \\ \det A_2 \\ \vdots \\ \det A_n \end{bmatrix}.$$

Although this is an explicit and elegant formula for $\mathbf{x}$, it is computationally intensive and is used mainly for theoretical purposes.

# 13.3 Volumes

The volume of the parallelepiped determined by the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ is

$$\operatorname{Vol}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3) = \operatorname{abs}\bigl(\mathbf{v}_1^T(\mathbf{v}_2 \times \mathbf{v}_3)\bigr) = \operatorname{abs}\bigl(\det \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 \end{bmatrix}\bigr),$$

where $\operatorname{abs}(x)$ denotes the absolute value of the number $x$.
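The two expressions in this formula can be compared directly in code. The sketch below is a minimal check, with hand-rolled `cross`, `dot`, and `det3` helpers (local names, not from the text); for concreteness it uses the vectors that reappear in Example 13.5 below.

```python
# Check that abs(v1 . (v2 x v3)) and abs(det[v1 v2 v3]) agree.
# cross/dot/det3 are ad hoc helpers written for this illustration.

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def det3(M):
    # cofactor expansion along the first row
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

v1, v2, v3 = [1, -1, 0], [0, 1, 2], [-1, 5, 1]
M = [[v1[k], v2[k], v3[k]] for k in range(3)]   # the vectors as columns

vol_triple = abs(dot(v1, cross(v2, v3)))
vol_det = abs(det3(M))
print(vol_triple, vol_det)   # 7 7
```

Both routes give the same volume, which is the fact the discussion below builds on.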
Let $A$ be an invertible matrix and let $\mathbf{w}_1 = A\mathbf{v}_1$, $\mathbf{w}_2 = A\mathbf{v}_2$, $\mathbf{w}_3 = A\mathbf{v}_3$. How are $\operatorname{Vol}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3)$ and $\operatorname{Vol}(\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3)$ related? Compute:

$$\begin{aligned}\operatorname{Vol}(\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3) &= \operatorname{abs}\bigl(\det \begin{bmatrix} \mathbf{w}_1 & \mathbf{w}_2 & \mathbf{w}_3 \end{bmatrix}\bigr) = \operatorname{abs}\bigl(\det \begin{bmatrix} A\mathbf{v}_1 & A\mathbf{v}_2 & A\mathbf{v}_3 \end{bmatrix}\bigr) \\ &= \operatorname{abs}\bigl(\det\bigl(A \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 \end{bmatrix}\bigr)\bigr) = \operatorname{abs}\bigl(\det A \cdot \det \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 \end{bmatrix}\bigr) \\ &= \operatorname{abs}(\det A) \cdot \operatorname{Vol}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3).\end{aligned}$$

Therefore, the number $\operatorname{abs}(\det A)$ is the factor by which volume is scaled under the linear transformation with matrix $A$. In summary:

Theorem 13.4: Suppose that $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are vectors in $\mathbb{R}^3$ that determine a parallelepiped of non-zero volume. Let $A$ be the matrix of a linear transformation and let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3$ be the images of $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ under $A$, respectively. Then

$$\operatorname{Vol}(\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3) = \operatorname{abs}(\det A) \cdot \operatorname{Vol}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3).$$

Example 13.5. Consider the data

$$A = \begin{bmatrix} 4 & 1 & -1 \\ 2 & 4 & 1 \\ 1 & 1 & 4 \end{bmatrix}, \quad \mathbf{v}_1 = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} -1 \\ 5 \\ 1 \end{bmatrix},$$

and let $\mathbf{w}_1 = A\mathbf{v}_1$, $\mathbf{w}_2 = A\mathbf{v}_2$, and $\mathbf{w}_3 = A\mathbf{v}_3$. Find the volume of the parallelepiped spanned by the vectors $\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3\}$.

Solution. We compute

$$\operatorname{Vol}(\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3) = \operatorname{abs}\bigl(\det \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 \end{bmatrix}\bigr) = \operatorname{abs}(-7) = 7$$

and $\det(A) = 55$. Therefore, the volume of the parallelepiped spanned by $\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3\}$ is

$$\operatorname{Vol}(\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3) = \operatorname{abs}(55) \times 7 = 385.$$

After this lecture you should know the following:

• what the Cofactor Method is
• what Cramer's Rule is
• the geometric
interpretation of the determinant (volume)

# Lecture 14: Vector Spaces

# 14.1 Vector Spaces

When you read or hear the word vector you may immediately think of two points in $\mathbb{R}^2$ (or $\mathbb{R}^3$) connected by an arrow. Mathematically speaking, a vector is just an element of a vector space. This then begs the question: what is a vector space? Roughly speaking, a vector space is a set of objects that can be added and multiplied by scalars. You have already worked with several types of vector spaces. Examples of vector spaces that you have already encountered are:

1. the set $\mathbb{R}^n$,
2. the set of all $n \times n$ matrices,
3. the set of all functions from $[a, b]$ to $\mathbb{R}$, and
4. the set of all sequences.

In all of these sets, there is an operation of "addition" and an operation of "multiplication by scalars". Let's formalize, then, exactly what we mean by a vector space.

Definition 14.1: A vector space is a set $V$ of objects, called vectors, on which two operations called addition and scalar multiplication have been defined satisfying the following properties. If $\mathbf{u}, \mathbf{v}, \mathbf{w}$ are in $V$ and if $\alpha, \beta \in \mathbb{R}$ are scalars:

(1) The sum $\mathbf{u} + \mathbf{v}$ is in $V$. (closure under addition)
(2) $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ (addition is commutative)
(3) $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$ (addition is associative)
(4) There is a vector in $V$ called the zero vector, denoted by $\mathbf{0}$, satisfying $\mathbf{v} + \mathbf{0} = \mathbf{v}$.
(5) For each $\mathbf{v}$ there is a vector $-\mathbf{v}$ in $V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$.
(6) The scalar multiple of $\mathbf{v}$ by $\alpha$, denoted $\alpha\mathbf{v}$, is in $V$. (closure under scalar multiplication)
(7)
$\alpha(\mathbf{u} + \mathbf{v}) = \alpha\mathbf{u} + \alpha\mathbf{v}$
(8) $(\alpha + \beta)\mathbf{v} = \alpha\mathbf{v} + \beta\mathbf{v}$
(9) $\alpha(\beta\mathbf{v}) = (\alpha\beta)\mathbf{v}$
(10) $1\mathbf{v} = \mathbf{v}$

It can be shown that $0 \cdot \mathbf{v} = \mathbf{0}$ for any vector $\mathbf{v}$ in $V$. To better understand the definition of a vector space, we first consider a few elementary examples.

Example 14.2. Let $V$ be the unit disc in $\mathbb{R}^2$:
$$V = \{(x, y) \in \mathbb{R}^2 \mid x^2 + y^2 \le 1\}.$$
Is $V$ a vector space?

Solution. The disc is not closed under scalar multiplication. For example, take $\mathbf{u} = (1, 0) \in V$ and multiply by, say, $\alpha = 2$. Then $\alpha\mathbf{u} = (2, 0)$ is not in $V$. Therefore, property (6) of the definition of a vector space fails, and consequently the unit disc is not a vector space.

Example 14.3. Let $V$ be the graph of the quadratic function $f(x) = x^2$:
$$V = \{(x, y) \in \mathbb{R}^2 \mid y = x^2\}.$$
Is $V$ a vector space?

Solution. The set $V$ is not closed under scalar multiplication. For example, $\mathbf{u} = (1, 1)$ is a point in $V$ but $2\mathbf{u} = (2, 2)$ is not. You may also notice that $V$ is not closed under addition either. For example, both $\mathbf{u} = (1, 1)$ and $\mathbf{v} = (2, 4)$ are in $V$, but $\mathbf{u} + \mathbf{v} = (3, 5)$ and $(3, 5)$ is not a point on the parabola $V$. Therefore, the graph of $f(x) = x^2$ is not a vector space.

Example 14.4. Let $V$ be the graph of the function $f(x) = 2x$:
$$V = \{(x, y) \in \mathbb{R}^2 \mid y = 2x\}.$$
Is $V$ a vector space?

Solution. We will show that $V$ is a vector space. First, we verify that $V$ is closed under addition. Note that an arbitrary point in $V$ can be written as $\mathbf{u} = (x, 2x)$. Let then $\mathbf{u} = (a, 2a)$ and $\mathbf{v} = (b, 2b)$ be points in $V$. Then
$$\mathbf{u} + \mathbf{v} = (a + b, 2a + 2b) = (a + b, 2(a + b)).$$
Therefore $V$ is closed under addition. Next we verify that $V$ is closed under scalar multiplication:
$$\alpha\mathbf{u} = \alpha(a, 2a) = (\alpha a, \alpha 2a) = (\alpha a, 2(\alpha a)).$$
Therefore $V$ is closed under scalar multiplication. There is a zero vector $\mathbf{0} = (0, 0)$ in $V$:
$$\mathbf{u} + \mathbf{0} = (a, 2a) + (0, 0) = (a, 2a).$$
All the other properties of a vector space can be verified to hold; for example, addition is commutative and associative in $V$ because addition in $\mathbb{R}^2$ is commutative and associative, etc. Therefore, the graph of the function $f(x) = 2x$ is a vector space.

The following example is important (it will appear frequently) and is our first example of what we could call an "abstract vector space". To emphasize: a vector space is a set that comes equipped with an operation of addition and an operation of scalar multiplication, and these two operations satisfy the list of properties above.

Example 14.5. Let $V = P_n[t]$ be the set of all polynomials in the variable $t$ of degree at most $n$:
$$P_n[t] = \{a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n \mid a_0, a_1, \ldots, a_n \in \mathbb{R}\}.$$
Is $V$ a vector space?

Solution. Let
$u(t) = u_0 + u_1 t + \cdots + u_n t^n$ and $v(t) = v_0 + v_1 t + \cdots + v_n t^n$ be polynomials in $V$. We define the addition of $u$ and $v$ as the new polynomial $(u + v)$ given by
$$(u + v)(t) = u(t) + v(t) = (u_0 + v_0) + (u_1 + v_1)t + \cdots + (u_n + v_n)t^n.$$
Then $u + v$ is a polynomial of degree at most $n$ and thus $(u + v) \in P_n[t]$; this shows that $P_n[t]$ is closed under addition. Now let $\alpha$ be a scalar and define a new polynomial $(\alpha u)$ as follows:
$$(\alpha u)(t) = (\alpha u_0) + (\alpha u_1)t + \cdots + (\alpha u_n)t^n.$$
Then $(\alpha u)$ is a polynomial of degree at most $n$ and thus $(\alpha u) \in P_n[t]$; hence, $P_n[t]$ is closed under scalar multiplication. The $\mathbf{0}$ vector in $P_n[t]$ is the zero polynomial $\mathbf{0}(t) = 0$. One can verify that all other properties of the definition of a vector space also hold; for example, addition is commutative and associative, etc. Thus $P_n[t]$ is a vector space.

Example 14.6. Let $V = M_{m \times n}$ be the set of all $m \times n$ matrices. Under the usual operations of addition of matrices and scalar multiplication, is $M_{m \times n}$ a vector space?

Solution. Given matrices $A, B \in M_{m \times n}$ and a scalar $\alpha$, we define the sum $A + B$ by adding entry-by-entry, and $\alpha A$ by multiplying each entry of $A$ by $\alpha$. It is clear that $M_{m \times n}$ is closed under these two operations. The $\mathbf{0}$ vector in $M_{m \times n}$ is the matrix of size $m \times n$ having all entries equal to zero. It can be verified that all other properties of the definition of a vector space also hold. Thus, the set $M_{m \times n}$ is a vector space.
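The closure arguments for $P_n[t]$ and $M_{m \times n}$ can both be made concrete. A polynomial of degree at most $n$, for instance, is captured by its coefficient list $[a_0, a_1, \ldots, a_n]$, and addition and scalar multiplication act entrywise. A small sketch (the helper names and the sample polynomials are illustrative choices, not from the lecture):

```python
# Model u(t) = u0 + u1 t + ... + un t^n by its coefficient list.
# Addition and scalar multiplication are entrywise, so the results again
# have n + 1 coefficients: this is the closure property of Example 14.5.

def poly_add(u, v):
    return [a + b for a, b in zip(u, v)]

def poly_scale(alpha, u):
    return [alpha * a for a in u]

def poly_eval(u, t):
    return sum(a * t**k for k, a in enumerate(u))

u = [5, 0, -1]       # 5 - t^2        (an element of P2[t])
v = [1, 2, 3]        # 1 + 2t + 3t^2
s = poly_add(u, v)   # 6 + 2t + 2t^2  -- again degree <= 2
print(s, poly_scale(2, u))
print(poly_eval(s, 1))   # (u+v)(1) = u(1) + v(1) = 4 + 6 = 10
```

The sum and the scalar multiple again have length $n + 1$, which is exactly the closure verified above.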
Example 14.7. The $n$-dimensional Euclidean space $V = \mathbb{R}^n$ under the usual operations of addition and scalar multiplication is a vector space.

Example 14.8. Let $V = C[a, b]$ denote the set of continuous functions with domain $[a, b]$ and codomain $\mathbb{R}$. Is $V$ a vector space?

# 14.2 Subspaces of Vector Spaces

Frequently, one encounters a vector space $W$ that is a subset of a larger vector space $V$. In this case, we say that $W$ is a subspace of $V$. Below is the formal definition.

Definition 14.9: Let $V$ be a vector space. A subset $W$ of $V$ is called a subspace of $V$ if it satisfies the following properties:

(1) The zero vector of $V$ is also in $W$.
(2) $W$ is closed under addition, that is, if $\mathbf{u}$ and $\mathbf{v}$ are in $W$ then $\mathbf{u} + \mathbf{v}$ is in $W$.
(3) $W$ is closed under scalar multiplication, that is, if $\mathbf{u}$ is in $W$ and $\alpha$ is a scalar then $\alpha\mathbf{u}$ is in $W$.

Example 14.10. Let $W$ be the graph of the function $f(x) = 2x$:
$$W = \{(x, y) \in \mathbb{R}^2 \mid y = 2x\}.$$
Is $W$ a subspace of $V = \mathbb{R}^2$?

Solution. If $x = 0$ then $y = 2 \cdot 0 = 0$, and therefore $\mathbf{0} = (0, 0)$ is in $W$. Let $\mathbf{u} = (a, 2a)$ and $\mathbf{v} = (b, 2b)$ be elements of $W$. Then
$$\mathbf{u} + \mathbf{v} = (a, 2a) + (b, 2b) = (a + b, 2a + 2b) = (a + b, 2(a + b)).$$
Because the $x$- and $y$-components of $\mathbf{u} + \mathbf{v}$ satisfy $y = 2x$, the vector $\mathbf{u} + \mathbf{v}$ is in $W$. Thus, $W$ is closed under addition. Now let $\alpha$ be any scalar and let $\mathbf{u} = (a, 2a)$ be an element of $W$. Then
$$\alpha\mathbf{u} = (\alpha a, \alpha 2a) = (\alpha a, 2(\alpha a)).$$
Because the $x$- and $y$-components of $\alpha\mathbf{u}$ satisfy $y = 2x$, the vector $\alpha\mathbf{u}$ is an element of $W$, and thus $W$ is closed under scalar multiplication. All three conditions of a subspace are satisfied for $W$, and therefore $W$ is a subspace of $V$.

Example 14.11. Let $W$ be the first quadrant in $\mathbb{R}^2$:
$$W = \{(x, y) \in \mathbb{R}^2 \mid x \ge 0,\ y \ge 0\}.$$
Is $W$ a subspace?

Solution. The set $W$ contains the zero vector, and the sum of two vectors in $W$ is again in $W$; you may want to verify this explicitly as follows: if $\mathbf{u}_1 = (x_1, y_1)$ is in $W$ then $x_1 \ge 0$ and $y_1 \ge 0$, and similarly if $\mathbf{u}_2 = (x_2, y_2)$ is in $W$ then $x_2 \ge 0$ and $y_2 \ge 0$. Then the sum $\mathbf{u}_1 + \mathbf{u}_2 = (x_1 + x_2, y_1 + y_2)$ has components $x_1 + x_2 \ge 0$ and $y_1 + y_2 \ge 0$, and therefore $\mathbf{u}_1 + \mathbf{u}_2$ is in $W$. However, $W$ is not closed under scalar multiplication. For example, if $\mathbf{u} = (1, 1)$ and $\alpha = -1$ then $\alpha\mathbf{u} = (-1, -1)$ is not in $W$ because the components of $\alpha\mathbf{u}$ are clearly not non-negative. Therefore, $W$ is not a subspace.
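The contrast between Examples 14.10 and 14.11 comes down to two membership tests; a minimal sketch, with hypothetical helper names:

```python
# Membership tests for the two subsets of R^2 just discussed: the line
# y = 2x (a subspace) and the first quadrant (not one).

def on_line(p):
    x, y = p
    return y == 2 * x

def in_quadrant(p):
    x, y = p
    return x >= 0 and y >= 0

u, v = (1, 2), (3, 6)                    # both on the line y = 2x
s = (u[0] + v[0], u[1] + v[1])           # their sum
m = (-5 * u[0], -5 * u[1])               # a scalar multiple
print(on_line(s), on_line(m))            # True True: closed under + and scalars

w = (1, 1)
neg = (-1 * w[0], -1 * w[1])             # alpha = -1, as in Example 14.11
print(in_quadrant(w), in_quadrant(neg))  # True False: not closed under scalars
```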
Example 14.12. Let $V = M_{n \times n}$ be the vector space of all $n \times n$ matrices. We define the trace of a matrix $A \in M_{n \times n}$ as the sum of its diagonal entries:
$$\operatorname{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}.$$
Let $W$ be the set of all $n \times n$ matrices whose trace is zero:
$$W = \{A \in M_{n \times n} \mid \operatorname{tr}(A) = 0\}.$$
Is $W$ a subspace of $V$?

Solution. If $\mathbf{0}$ is the $n \times n$ zero matrix then clearly $\operatorname{tr}(\mathbf{0}) = 0$, and thus $\mathbf{0} \in W$. Suppose that $A$ and $B$ are in $W$. Then necessarily $\operatorname{tr}(A) = 0$ and $\operatorname{tr}(B) = 0$. Consider the matrix $C = A + B$. Then
$$\begin{aligned}\operatorname{tr}(C) = \operatorname{tr}(A + B) &= (a_{11} + b_{11}) + (a_{22} + b_{22}) + \cdots + (a_{nn} + b_{nn}) \\ &= (a_{11} + \cdots + a_{nn}) + (b_{11} + \cdots + b_{nn}) \\ &= \operatorname{tr}(A) + \operatorname{tr}(B) \\ &= 0.\end{aligned}$$
Therefore $\operatorname{tr}(C) = 0$, and consequently $C = A + B \in W$; in other words, $W$ is closed under addition. Now let $\alpha$ be a scalar and let $C = \alpha A$. Then
$$\operatorname{tr}(C) = \operatorname{tr}(\alpha A) = (\alpha a_{11}) + (\alpha a_{22}) + \cdots + (\alpha a_{nn}) = \alpha \operatorname{tr}(A) = 0.$$
Thus $\operatorname{tr}(C) = 0$, that is, $C = \alpha A \in W$, and consequently $W$ is closed under scalar multiplication. Therefore, the set $W$ is a subspace of $V$.

Example 14.13. Let $V = P_n[t]$ and consider the subset $W$ of $V$:
$$W = \{u \in P_n[t] \mid u'(1) = 0\}.$$
In other words, $W$ consists of the polynomials of degree at most $n$ in the variable $t$ whose derivative at $t = 1$ is zero. Is $W$ a subspace of $V$?

Solution. The zero polynomial $\mathbf{0}(t) = 0$ clearly has derivative at $t = 1$ equal to zero, that is, $\mathbf{0}'(1) = 0$, and thus the zero polynomial is in $W$. Now suppose that $u(t)$ and $v(t)$ are two polynomials in $W$. Then $u'(1) = 0$ and also $v'(1) = 0$. To verify whether or not $W$ is closed under addition, we must determine whether the sum polynomial $(u + v)(t)$ has derivative at $t = 1$ equal to zero. From the rules of differentiation, we compute
$$(u + v)'(1) = u'(1) + v'(1) = 0 + 0 = 0.$$
Therefore, the polynomial $(u + v)$ is in $W$, and thus $W$ is closed under addition. Now let $\alpha$ be any scalar and let $u(t)$ be a polynomial in $W$. Then $u'(1) = 0$. To determine whether or not the scalar multiple $\alpha u(t)$ is in $W$, we must determine whether $\alpha u(t)$ has derivative zero at $t = 1$. Using the rules of differentiation, we compute
$$(\alpha u)'(1) = \alpha u'(1) = \alpha \cdot 0 = 0.$$
Therefore, the polynomial $(\alpha u)(t)$ is in $W$, and thus $W$ is closed under scalar multiplication. All three properties of a subspace hold for $W$, and therefore $W$ is a subspace of $P_n[t]$.

Example 14.14. Let $V = P_n[t]$ and consider the subset $W$ of $V$:
$$W = \{u \in P_n[t] \mid u(2) = -1\}.$$
In other words, $W$ consists of the polynomials of degree at most $n$ in the variable $t$ whose value at $t = 2$ is $-1$. Is $W$ a subspace of $V$?

Solution. The zero polynomial $\mathbf{0}(t) = 0$ clearly
does not equal $-1$ at $t = 2$. Therefore, $W$ does not contain the zero polynomial and, because all three conditions of a subspace must be satisfied for $W$ to be a subspace, $W$ is not a subspace of $P_n[t]$. As an exercise, you may want to investigate whether or not $W$ is closed under addition and scalar multiplication.

Example 14.15. A square matrix $A$ is said to be symmetric if $A^T = A$. For example, here is a $3 \times 3$ symmetric matrix:
$$A = \begin{bmatrix} 1 & 2 & -3 \\ 2 & 4 & 5 \\ -3 & 5 & 7 \end{bmatrix}.$$
Verify for yourself that we do indeed have $A^T = A$. Let $W$ be the set of all symmetric $n \times n$ matrices. Is $W$ a subspace of $V = M_{n \times n}$?

Example 14.16. For any vector space $V$, there are two trivial subspaces of $V$: namely, $V$ itself is a subspace of $V$, and the set consisting of the zero vector, $W = \{\mathbf{0}\}$, is a subspace of $V$.

There is one particular way to generate a subspace of any given vector space $V$: using the span of a set of vectors. Recall that we defined the span of a set of vectors in $\mathbb{R}^n$, but we can define the same notion in a general vector space $V$.

Definition 14.17: Let $V$ be a vector space and let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$ be vectors in $V$. The span of $\{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ is the set of all linear combinations of $\mathbf{v}_1, \ldots, \mathbf{v}_p$:
$$\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\} = \{t_1\mathbf{v}_1 + t_2\mathbf{v}_2 + \cdots + t_p\mathbf{v}_p \mid t_1, t_2, \ldots, t_p \in \mathbb{R}\}.$$
We now show that the
span of a set of vectors in $V$ is a subspace of $V$.

Theorem 14.18: If $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$ are vectors in $V$ then $\operatorname{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ is a subspace of $V$.

Proof. Let $\mathbf{u} = t_1\mathbf{v}_1 + \cdots + t_p\mathbf{v}_p$ and $\mathbf{w} = s_1\mathbf{v}_1 + \cdots + s_p\mathbf{v}_p$ be two vectors in $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$. Then
$$\mathbf{u} + \mathbf{w} = (t_1\mathbf{v}_1 + \cdots + t_p\mathbf{v}_p) + (s_1\mathbf{v}_1 + \cdots + s_p\mathbf{v}_p) = (t_1 + s_1)\mathbf{v}_1 + \cdots + (t_p + s_p)\mathbf{v}_p.$$
Therefore $\mathbf{u} + \mathbf{w}$ is also in the span of $\mathbf{v}_1, \ldots, \mathbf{v}_p$. Now consider $\alpha\mathbf{u}$:
$$\alpha\mathbf{u} = \alpha(t_1\mathbf{v}_1 + \cdots + t_p\mathbf{v}_p) = (\alpha t_1)\mathbf{v}_1 + \cdots + (\alpha t_p)\mathbf{v}_p.$$
Therefore $\alpha\mathbf{u}$ is in the span of $\mathbf{v}_1, \ldots, \mathbf{v}_p$. Lastly, since $0\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 0\mathbf{v}_p = \mathbf{0}$, the zero vector $\mathbf{0}$ is in the span of $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p$. Therefore, $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ is a subspace of $V$.

Given a general subspace $W$ of $V$, if $\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_p$ are vectors in $W$ such that $\operatorname{span}\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_p\} = W$, then we say that $\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_p\}$ is a spanning set of $W$. In this case, every vector in $W$ can be written as a linear combination of the vectors $\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_p$.

After this lecture you should know the following:

• what a vector space/subspace is
• be able to give some examples of vector spaces/subspaces
• that the span of a set of vectors in $V$ is a subspace of $V$

# Lecture 15: Linear Maps

Before we begin this lecture, we review subspaces. Recall that $W$ is a subspace of a vector space $V$ if $W$ is a subset of $V$ and

1. the zero vector $\mathbf{0}$ of $V$ is also in $W$,
2. for any vectors $\mathbf{u}, \mathbf{v}$ in $W$ the sum $\mathbf{u} + \mathbf{v}$ is also in $W$, and
3. for any vector $\mathbf{u}$ in $W$ and any scalar $\alpha$ the vector $\alpha\mathbf{u}$ is also in $W$.

In the previous lecture we gave several examples of subspaces. For example, we showed that a line through the origin in $\mathbb{R}^2$ is a subspace of $\mathbb{R}^2$, and we gave examples of subspaces of $P_n[t]$ and $M_{m \times n}$. We also showed that if $\mathbf{v}_1, \ldots, \mathbf{v}_p$ are vectors in a vector space $V$ then $W = \operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ is a subspace of $V$.

# 15.1 Linear Maps on Vector Spaces

In Lecture 7, we defined what it means for a vector mapping $T : \mathbb{R}^n \to \mathbb{R}^m$ to be a linear mapping. We now want to introduce linear mappings on general vector spaces; you will notice that the definition is essentially the same, but the key point to remember is that the underlying spaces are not $\mathbb{R}^n$ but a general vector space.

Definition 15.1: Let $T : V \to U$ be a mapping of vector spaces. Then $T$ is called a linear mapping if

• for any $\mathbf{u}, \mathbf{v}$ in $V$ it holds that $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$, and
• for any scalar $\alpha$ and any $\mathbf{v}$ in $V$ it holds that $T(\alpha\mathbf{v}) = \alpha T(\mathbf{v})$.

Example 15.2. Let $V = M_{n \times n}$ be
the vector space of $n \times n$ matrices and let $T : V \to V$ be the mapping
$$T(A) = A + A^T.$$
Is $T$ a linear mapping?

Solution. Let $A$ and $B$ be matrices in $V$. Then, using the properties of the transpose and regrouping, we obtain
$$T(A + B) = (A + B) + (A + B)^T = A + B + A^T + B^T = (A + A^T) + (B + B^T) = T(A) + T(B).$$
Similarly, if $\alpha$ is any scalar then
$$T(\alpha A) = (\alpha A) + (\alpha A)^T = \alpha A + \alpha A^T = \alpha(A + A^T) = \alpha T(A).$$
This proves that $T$ satisfies both conditions of Definition 15.1, and thus $T$ is a linear mapping.

Example 15.3. Let $V = M_{n \times n}$ be the vector space of $n \times n$ matrices, where $n \ge 2$, and let $T : V \to \mathbb{R}$ be the mapping
$$T(A) = \det(A).$$
Is $T$ a linear mapping?

Solution. If $T$ were a linear mapping then, according to Definition 15.1, we would need $T(A + B) = \det(A + B) = \det(A) + \det(B)$ and also $T(\alpha A) = \alpha T(A)$ for any scalar $\alpha$. Do these properties actually hold, though? We know from the properties of the determinant that $\det(\alpha A) = \alpha^n \det(A)$, and therefore $T(\alpha A) = \alpha T(A)$ fails for most scalars $\alpha$. Therefore, $T$ is not a linear mapping. It also does not hold in general that $\det(A + B) = \det(A) + \det(B)$; in fact, it rarely holds. For example, if
$$A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} -1 & 1 \\ 0 & 3 \end{bmatrix},$$
then $\det(A) = 2$ and $\det(B) = -3$, and therefore $\det(A) + \det(B) = -1$.
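Both computations are easy to replay for $2 \times 2$ matrices. The sketch below uses ad hoc helpers and the matrices of Example 15.3; it confirms that $T(A) = A + A^T$ respects addition while $\det$ does not.

```python
# T(A) = A + A^T is additive (Example 15.2); det is not (Example 15.3).
# add/transpose/det2 are local helpers for 2x2 matrices given as row lists.

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def T(A):                       # the map of Example 15.2
    return add(A, transpose(A))

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

A = [[2, 0], [0, 1]]            # the matrices from Example 15.3
B = [[-1, 1], [0, 3]]

print(T(add(A, B)) == add(T(A), T(B)))     # True: T is additive
print(det2(add(A, B)), det2(A) + det2(B))  # 4 -1: det is not additive
```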
On the other hand,
$$A + B = \begin{bmatrix} 1 & 1 \\ 0 & 4 \end{bmatrix},$$
and thus $\det(A + B) = 4$. Thus, $\det(A + B) \ne \det(A) + \det(B)$.

Example 15.4. Let $V = P_n[t]$ be the vector space of polynomials in the variable $t$ of degree no more than $n \ge 1$. Consider the mapping $T : V \to V$ defined as
$$T(f(t)) = 2f(t) + f'(t).$$
For example, if $f(t) = 3t^6 - t^2 + 5$ then
$$T(f(t)) = 2f(t) + f'(t) = 2(3t^6 - t^2 + 5) + (18t^5 - 2t) = 6t^6 + 18t^5 - 2t^2 - 2t + 10.$$
Is $T$ a linear mapping?

Solution. Let $f(t)$ and $g(t)$ be polynomials of degree no more than $n$. Then
$$T(f(t) + g(t)) = 2(f(t) + g(t)) + (f(t) + g(t))' = 2f(t) + 2g(t) + f'(t) + g'(t) = (2f(t) + f'(t)) + (2g(t) + g'(t)) = T(f(t)) + T(g(t)).$$
Therefore, $T(f(t) + g(t)) = T(f(t)) + T(g(t))$. Now let $\alpha$ be any scalar. Then
$$T(\alpha f(t)) = 2(\alpha f(t)) + (\alpha f(t))' = 2\alpha f(t) + \alpha f'(t) = \alpha(2f(t) + f'(t)) = \alpha T(f(t)).$$
Therefore, $T(\alpha f(t)) = \alpha T(f(t))$, and so $T$ is a linear mapping.

We now introduce two important subsets associated to a linear mapping.

Definition 15.5: Let $T : V \to U$ be a linear mapping.

1. The kernel of $T$ is the set of vectors $\mathbf{v}$ in the domain $V$ that get mapped to
the zero vector, that is, $T(\mathbf{v}) = \mathbf{0}$. We denote the kernel of $T$ by $\ker(T)$:
$$\ker(T) = \{\mathbf{v} \in V \mid T(\mathbf{v}) = \mathbf{0}\}.$$
2. The range of $T$ is the set of vectors $\mathbf{b}$ in the codomain $U$ for which there exists at least one $\mathbf{v}$ in $V$ such that $T(\mathbf{v}) = \mathbf{b}$. We denote the range of $T$ by $\operatorname{Range}(T)$:
$$\operatorname{Range}(T) = \{\mathbf{b} \in U \mid \text{there exists some } \mathbf{v} \in V \text{ such that } T(\mathbf{v}) = \mathbf{b}\}.$$

You may have noticed that the definition of the range of a linear mapping on an abstract vector space is the usual definition of the range of a function. Not surprisingly, the kernel and range are subspaces of the domain and codomain, respectively.

Theorem 15.6: Let $T : V \to U$ be a linear mapping. Then $\ker(T)$ is a subspace of $V$ and $\operatorname{Range}(T)$ is a subspace of $U$.

Proof. Suppose that $\mathbf{v}$ and $\mathbf{u}$ are in $\ker(T)$. Then $T(\mathbf{v}) = \mathbf{0}$ and $T(\mathbf{u}) = \mathbf{0}$. By linearity of $T$ it holds that
$$T(\mathbf{v} + \mathbf{u}) = T(\mathbf{v}) + T(\mathbf{u}) = \mathbf{0} + \mathbf{0} = \mathbf{0}.$$
Therefore, since $T(\mathbf{u} + \mathbf{v}) = \mathbf{0}$, the vector $\mathbf{u} + \mathbf{v}$ is in $\ker(T)$. This shows that $\ker(T)$ is closed under addition. Now suppose that $\alpha$ is any scalar and $\mathbf{v}$ is in $\ker(T)$. Then $T(\mathbf{v}) = \mathbf{0}$, and thus by linearity of $T$ it holds that
$$T(\alpha\mathbf{v}) = \alpha T(\mathbf{v}) = \alpha\mathbf{0} = \mathbf{0}.$$
Therefore, since $T(\alpha\mathbf{v}) = \mathbf{0}$, the vector $\alpha\mathbf{v}$ is in $\ker(T)$, and this proves that $\ker(T)$ is closed under scalar multiplication. Lastly, by linearity of $T$ it holds that
$$T(\mathbf{0}) = T(\mathbf{v} - \mathbf{v}) = T(\mathbf{v}) - T(\mathbf{v}) = \mathbf{0},$$
that is,
$T(\mathbf{0}) = \mathbf{0}$. Therefore, the zero vector $\mathbf{0}$ is in $\ker(T)$. This proves that $\ker(T)$ is a subspace of $V$. The proof that $\operatorname{Range}(T)$ is a subspace of $U$ is left as an exercise.

Example 15.7. Let $V = M_{n \times n}$ be the vector space of $n \times n$ matrices and let $T : V \to V$ be the mapping $T(A) = A + A^T$. Describe the kernel of $T$.

Solution. A matrix $A$ is in the kernel of $T$ if $T(A) = A + A^T = \mathbf{0}$, that is, if $A^T = -A$. Hence,
$$\ker(T) = \{A \in M_{n \times n} \mid A^T = -A\}.$$
What type of matrix $A$ satisfies $A^T = -A$? Consider, for example, the case where $A$ is the $2 \times 2$ matrix
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$
and $A^T = -A$. Then
$$\begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix} = \begin{bmatrix} -a_{11} & -a_{12} \\ -a_{21} & -a_{22} \end{bmatrix}.$$
Therefore, it must hold that $a_{11} = -a_{11}$, $a_{21} = -a_{12}$, and $a_{22} = -a_{22}$. Then necessarily $a_{11} = 0$ and $a_{22} = 0$, while $a_{12}$ can be arbitrary. For example, the matrix
$$A = \begin{bmatrix} 0 & 7 \\ -7 & 0 \end{bmatrix}$$
satisfies $A^T = -A$. Using a similar computation, a $3 \times 3$ matrix satisfies $A^T = -A$ if $A$ is of the form
$$A = \begin{bmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{bmatrix},$$
where $a, b, c$ are arbitrary constants. In general, a matrix $A$ that satisfies $A^T = -A$ is called skew-symmetric.

Example 15.8. Let $V$ be the vector space of differentiable functions on the interval $[a, b]$; that is, $f$ is an element of $V$ if $f : [a, b] \to \mathbb{R}$ is differentiable. Describe the kernel of the linear
mapping $T : V \to V$ defined as $T(f(x)) = f(x) + f'(x)$.

Solution. A function $f$ is in the kernel of $T$ if $T(f(x)) = 0$, that is, if $f(x) + f'(x) = 0$, or equivalently, if $f'(x) = -f(x)$. What functions $f$ do you know of that satisfy $f'(x) = -f(x)$? How about $f(x) = e^{-x}$? It is clear that $f'(x) = -e^{-x} = -f(x)$, and thus $f(x) = e^{-x}$ is in $\ker(T)$. How about $g(x) = 2e^{-x}$? We compute that $g'(x) = -2e^{-x} = -g(x)$, and thus $g$ is also in $\ker(T)$. It turns out that the elements of $\ker(T)$ are exactly the functions of the form $f(x) = Ce^{-x}$ for a constant $C$.

# 15.2 Null space and Column space

In the previous section, we introduced the kernel and range of a general linear mapping $T : V \to U$. In this section, we consider the particular case of matrix mappings $T_A : \mathbb{R}^n \to \mathbb{R}^m$ for some $m \times n$ matrix $A$. In this case, $\mathbf{v}$ is in the kernel of $T_A$ if and only if $T_A(\mathbf{v}) = A\mathbf{v} = \mathbf{0}$. In other words, $\mathbf{v} \in \ker(T_A)$ if and only if $\mathbf{v}$ is a solution to the homogeneous system $A\mathbf{x} = \mathbf{0}$. Because the case when $T$ is a matrix mapping arises so frequently, we give a name to the set of vectors $\mathbf{v}$ such that $A\mathbf{v} = \mathbf{0}$.

Definition 15.9: The null space of a matrix $A \in M_{m \times n}$, denoted by $\operatorname{Null}(A)$, is the subset of $\mathbb{R}^n$ consisting of the vectors $\mathbf{v}$ such that $A\mathbf{v} = \mathbf{0}$. In other words, $\mathbf{v} \in \operatorname{Null}(A)$ if and only if $A\mathbf{v} = \mathbf{0}$. Using set notation:
$$\operatorname{Null}(A) = \{\mathbf{v} \in \mathbb{R}^n \mid A\mathbf{v} = \mathbf{0}\}.$$
Hence, the following holds:
$$\ker(T_A) = \operatorname{Null}(A).$$
Because the kernel of a linear mapping is a subspace, we obtain the following.

Theorem 15.10: If $A \in M_{m \times n}$ then $\operatorname{Null}(A)$ is a subspace of $\mathbb{R}^n$.

Hence, by Theorem 15.10, if $\mathbf{u}$ and $\mathbf{v}$ are two solutions to the linear system $A\mathbf{x} = \mathbf{0}$ then $\alpha\mathbf{u} + \beta\mathbf{v}$ is also a solution:
$$A(\alpha\mathbf{u} + \beta\mathbf{v}) = \alpha A\mathbf{u} + \beta A\mathbf{v} = \alpha \cdot \mathbf{0} + \beta \cdot \mathbf{0} = \mathbf{0}.$$

Example 15.11. Let $V = \mathbb{R}^4$ and consider the following subset of $V$:
$$W = \{(x_1, x_2, x_3, x_4) \in \mathbb{R}^4 \mid 2x_1 - 3x_2 + x_3 - 7x_4 = 0\}.$$
Is $W$ a subspace of $V$?

Solution. The set $W$ is the null space of the $1 \times 4$ matrix $A$ given by
$$A = \begin{bmatrix} 2 & -3 & 1 & -7 \end{bmatrix}.$$
Hence, $W = \operatorname{Null}(A)$, and consequently $W$ is a subspace.

From our previous remarks, the null space of a matrix $A \in M_{m \times n}$ is just the solution set of the homogeneous system $A\mathbf{x} = \mathbf{0}$. Therefore, one way to explicitly describe the null space of $A$ is to solve the system $A\mathbf{x} = \mathbf{0}$ and write the general solution in parametric vector form. From our previous work on solving linear systems, if $\operatorname{rref}(A)$ has $r$ leading 1's then the number of parameters in the solution set is $d = n - r$. Therefore, after performing back substitution, we will obtain vectors $\mathbf{v}_1, \ldots, \mathbf{v}_d$ such that the general solution in parametric vector form can be written as
$$\mathbf{x} = t_1\mathbf{v}_1 + t_2\mathbf{v}_2 + \cdots + t_d\mathbf{v}_d,$$
where $t_1, t_2, \ldots, t_d$ are arbitrary numbers. Therefore,
$$\operatorname{Null}(A) = \operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_d\}.$$
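As a concrete check, anticipating the matrix of Example 15.12 below, the spanning vectors produced by back substitution all satisfy $A\mathbf{v} = \mathbf{0}$, and by Theorem 15.10 so does any linear combination of them. The `matvec` helper is a local convenience:

```python
# Verify that each spanning vector of Null(A) solves Av = 0, and that an
# arbitrary linear combination of them does too (Theorem 15.10).

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[-3, 6, -1, 1, -7],
     [ 1, -2,  2, 3, -1],
     [ 2, -4,  5, 8, -4]]

v1 = [-3, 0,  2, 0, 1]
v2 = [ 1, 0, -2, 1, 0]
v3 = [ 2, 1,  0, 0, 0]

print([matvec(A, v) for v in (v1, v2, v3)])   # three zero vectors

combo = [4*a - 2*b + 7*c for a, b, c in zip(v1, v2, v3)]
print(matvec(A, combo))                        # still the zero vector
```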
Hence, the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_d$ form a spanning set for $\operatorname{Null}(A)$.

Example 15.12. Find a spanning set for the null space of the matrix
$$A = \begin{bmatrix} -3 & 6 & -1 & 1 & -7 \\ 1 & -2 & 2 & 3 & -1 \\ 2 & -4 & 5 & 8 & -4 \end{bmatrix}.$$

Solution. The null space of $A$ is the solution set of the homogeneous system $A\mathbf{x} = \mathbf{0}$. Performing elementary row operations, one obtains
$$A \sim \begin{bmatrix} 1 & -2 & 0 & -1 & 3 \\ 0 & 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$
Clearly $r = \operatorname{rank}(A) = 2$, and since $n = 5$ we will have $d = 3$ vectors in a spanning set for $\operatorname{Null}(A)$. Letting $x_5 = t_1$ and $x_4 = t_2$, from the second row we obtain $x_3 = -2t_2 + 2t_1$. Letting $x_2 = t_3$, from the first row we obtain $x_1 = 2t_3 + t_2 - 3t_1$. Writing the general solution in parametric vector form, we obtain
$$\mathbf{x} = t_1\begin{bmatrix} -3 \\ 0 \\ 2 \\ 0 \\ 1 \end{bmatrix} + t_2\begin{bmatrix} 1 \\ 0 \\ -2 \\ 1 \\ 0 \end{bmatrix} + t_3\begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$
Therefore,
$$\operatorname{Null}(A) = \operatorname{span}\left\{\underbrace{\begin{bmatrix} -3 \\ 0 \\ 2 \\ 0 \\ 1 \end{bmatrix}}_{\mathbf{v}_1},\ \underbrace{\begin{bmatrix} 1 \\ 0 \\ -2 \\ 1 \\ 0 \end{bmatrix}}_{\mathbf{v}_2},\ \underbrace{\begin{bmatrix} 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}_{\mathbf{v}_3}\right\}.$$
You can verify that $A\mathbf{v}_1 = A\mathbf{v}_2 = A\mathbf{v}_3 = \mathbf{0}$.

Now we consider the range of a matrix mapping $T_A : \mathbb{R}^n \to \mathbb{R}^m$. Recall that a vector $\mathbf{b}$ in the codomain $\mathbb{R}^m$ is in the range of $T_A$ if there exists some vector $\mathbf{x}$ in the domain $\mathbb{R}^n$ such that $T_A(\mathbf{x}) = \mathbf{b}$. Since $T_A(\mathbf{x}) = A\mathbf{x}$, this means $A\mathbf{x} = \mathbf{b}$. Now, if $A$ has columns
$A = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}$ and $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, then recall that
$$A\mathbf{x} = x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + \cdots + x_n\mathbf{v}_n,$$
and thus $A\mathbf{x} = x_1\mathbf{v}_1 + x_2\mathbf{v}_2 + \cdots + x_n\mathbf{v}_n = \mathbf{b}$. Thus, a vector $\mathbf{b}$ is in the range of $T_A$ if it can be written as a linear combination of the columns $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$ of $A$. This motivates the following definition.

Definition 15.13: Let $A \in M_{m \times n}$ be a matrix. The span of the columns of $A$ is called the column space of $A$, denoted by $\operatorname{Col}(A)$. Explicitly, if $A = \begin{bmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \cdots & \mathbf{v}_n \end{bmatrix}$ then
$$\operatorname{Col}(A) = \operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}.$$
In summary, we can write
$$\operatorname{Range}(T_A) = \operatorname{Col}(A),$$
and since $\operatorname{Range}(T_A)$ is a subspace of $\mathbb{R}^m$, so is $\operatorname{Col}(A)$.

Theorem 15.14: The column space of an $m \times n$ matrix is a subspace of $\mathbb{R}^m$.

Example 15.15. Let
$$A = \begin{bmatrix} 2 & 4 & -2 & 1 \\ -2 & -5 & 7 & 3 \\ 3 & 7 & -8 & 6 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 3 \\ -1 \\ 3 \end{bmatrix}.$$
Is $\mathbf{b}$ in the column space $\operatorname{Col}(A)$?

Solution. The vector $\mathbf{b}$ is in the column space of $A$ if there exists $\mathbf{x} \in \mathbb{R}^4$ such that $A\mathbf{x} = \mathbf{b}$. Hence, we must determine whether $A\mathbf{x} = \mathbf{b}$ has a solution. Performing elementary row operations on the augmented matrix $[A\ \mathbf{b}]$, we obtain
$$[A\ \mathbf{b}] \sim \begin{bmatrix} 2 & 4 & -2 & 1 & 3 \\ 0 & 1 & -5 & -4 & -2 \\ 0 & 0 & 0 & 17 & 1 \end{bmatrix}.$$
The system is consistent and therefore $A\mathbf{x} = \mathbf{b}$ has a solution. Therefore, $\mathbf{b}$ is in $\operatorname{Col}(A)$.
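Another way to confirm this conclusion is to exhibit an explicit solution of $A\mathbf{x} = \mathbf{b}$. The particular $\mathbf{x}$ below was read off from the row-reduced augmented matrix by setting the free variable $x_3 = 0$; it is just one of infinitely many solutions.

```python
# Exhibiting one explicit solution of Ax = b confirms that b lies in Col(A).
# Fractions keep the arithmetic exact.

from fractions import Fraction as F

A = [[ 2,  4, -2, 1],
     [-2, -5,  7, 3],
     [ 3,  7, -8, 6]]
b = [3, -1, 3]

# Back substitution with x3 = 0 gives x4 = 1/17, x2 = -30/17, x1 = 5.
x = [F(5), F(-30, 17), F(0), F(1, 17)]

Ax = [sum(F(a) * xi for a, xi in zip(row, x)) for row in A]
print(Ax == [F(v) for v in b])   # True: b is a combination of A's columns
```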
After this lecture you should know the following:

• what the null space of a matrix is and how to compute it
• what the column space of a matrix is and how to determine whether a given vector is in the column space
• what the range and kernel of a linear mapping are

# Lecture 16: Linear Independence, Bases, and Dimension

# 16.1 Linear Independence

Roughly speaking, the concept of linear independence revolves around the idea of working with "efficient" spanning sets for a subspace. For instance, the set of directions {EAST, NORTH, NORTH-EAST} is redundant, since a total displacement in the NORTH-EAST direction can be obtained by combining individual NORTH and EAST displacements. With these vague statements out of the way, we introduce the formal definition of what it means for a set of vectors to be "efficient".

Definition 16.1: Let $V$ be a vector space and let $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ be a set of vectors in $V$. Then $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$ is linearly independent if the only scalars $c_1, c_2, \ldots, c_p$ that satisfy the equation
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p = \mathbf{0}$$
are the trivial scalars $c_1 = c_2 = \cdots = c_p = 0$. If the set $\{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ is not linearly independent, then we say that it is linearly dependent.

We now describe the redundancy in a set of linearly dependent vectors. If $\{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ are linearly dependent, it follows that there are scalars $c_1, c_2, \ldots, c_p$, at least one of which is nonzero, such that
$$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_p\mathbf{v}_p = \mathbf{0}. \tag{$\star$}$$
For example, suppose that $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}$ is linearly dependent. Then there are scalars $c_1, c_2, c_3, c_4$, not all of them zero, such that equation ($\star$) holds. Suppose, for the sake of argument, that $c_3 \ne 0$. Then
$$\mathbf{v}_3 = -\frac{c_1}{c_3}\mathbf{v}_1 - \frac{c_2}{c_3}\mathbf{v}_2 - \frac{c_4}{c_3}\mathbf{v}_4.$$
Therefore, when a set of vectors is linearly dependent, it is possible to write one of the vectors as a linear combination of the others. It is in this sense that a set of linearly dependent vectors is redundant. In fact, if a set of vectors is linearly dependent we can say even more, as the following theorem states.

Theorem 16.2: A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_p\}$, with $\mathbf{v}_1 \ne \mathbf{0}$, is linearly dependent if and only if some $\mathbf{v}_j$ is a linear combination of the preceding vectors $\mathbf{v}_1, \ldots, \mathbf{v}_{j-1}$.

Example 16.3. Show that the following set of $2 \times 2$ matrices is linearly dependent:
$$\left\{ A_1 = \begin{bmatrix} 1 & 2 \\ 0 & -1 \end{bmatrix},\ A_2 = \begin{bmatrix} -1 & 3 \\ 1 & 0 \end{bmatrix},\ A_3 = \begin{bmatrix} 5 & 0 \\ -2 & -3 \end{bmatrix} \right\}.$$

Solution. It is clear that $A_1$ and $A_2$ are linearly independent; that is, $A_1$ cannot be written as a scalar multiple of $A_2$, and vice-versa. Since the $(2, 1)$ entry of $A_1$ is zero, the only way to get the $-2$ in the $(2, 1)$ entry of $A_3$ is to multiply $A_2$ by $-2$. Similarly, since the $(2, 2)$ entry of $A_2$ is zero, the only way to get the $-3$ in the $(2, 2)$ entry of $A_3$ is to multiply $A_1$ by $3$. Hence, we suspect that $3A_1 - 2A_2 = A_3$. Verify:
$$3A_1 - 2A_2 = \begin{bmatrix} 3 & 6 \\ 0 & -3 \end{bmatrix} - \begin{bmatrix} -2 & 6 \\ 2 & 0 \end{bmatrix} = \begin{bmatrix} 5 & 0 \\ -2 & -3 \end{bmatrix} = A_3.$$
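The dependence relation just verified by hand can be double-checked entrywise:

```python
# Entrywise check of the relation 3*A1 - 2*A2 - A3 = 0 from Example 16.3.

A1 = [[1, 2], [0, -1]]
A2 = [[-1, 3], [1, 0]]
A3 = [[5, 0], [-2, -3]]

combo = [[3*A1[i][j] - 2*A2[i][j] - A3[i][j] for j in range(2)]
         for i in range(2)]
print(combo)   # [[0, 0], [0, 0]]
```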
2A2 = [3 60 −3 ] − [−2 62 0 ] = [ 5 0 −2 −3 ] = A3 Therefore, 3 A1 − 2A2 − A3 = 0 and thus we have found scalars c1, c 2, c 3 not all zero such that c1A1 + c2A2 + c3A3 = 0. # 16.2 Bases We now introduce the important concept of a basis. Given a set of vectors {v1, . . . , vp−1, vp} in V, we showed that W = span {v1, v2, . . . , vp} is a subspace of V. If say vp is linearly dependent on v1, v2, . . . , vp−1 then we can remove vp and the smaller set {v1, . . . , vp−1} still spans all of W: W = span {v1, v2, . . . , vp−1, vp} = span {v1, . . . , vp−1}. Intuitively, vp does not provide an independent “direction” in generating W. If some other vector vj is linearly dependent on v1, . . . , vp−1 then we can remove vj and the resulting smaller set of vectors still spans W. We can continue removing vectors until we obtain a minimal set of vectors that are linearly independent and still span W. The following remarks motivate the following important definition. Definition 16.4: Let W be a subspace of a vector space V. A set of vectors B = {v1, . . . , vk} in W is said to be a basis for W if (a) the set B spans all of W, that is, W = span {v1, . . . , vk}, and > 126 Lecture 16 (b) the set B is linearly independent. A basis is therefore a minimal spanning set for a subspace. Indeed, if B =
{v1, . . . , vp} is a basis for W and we remove, say, vp, then B̃ = {v1, . . . , vp−1} cannot be a basis for W. Why? If B = {v1, . . . , vp} is a basis then it is linearly independent, and therefore vp cannot be written as a linear combination of the others. In other words, vp ∈ W is not in the span of B̃ = {v1, . . . , vp−1}, and therefore B̃ is not a basis for W because a basis must be a spanning set. If, on the other hand, we start with a basis B = {v1, . . . , vp} for W and we add a new vector u from W, then B̃ = {v1, . . . , vp, u} is not a basis for W. Why? We still have that span B̃ = W, but now B̃ is not linearly independent. Indeed, because B = {v1, . . . , vp} is a basis for W, the vector u can be written as a linear combination of {v1, . . . , vp}, and thus B̃ is not linearly independent.

Example 16.5. Show that the standard unit vectors form a basis for V = R3:

e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).

Solution. Any vector x ∈ R3 can be written as a linear combination of e1, e2, e3:

x = (x1, x2, x3) = x1 e1 + x2 e2 + x3 e3.

Therefore, span{e1, e2, e3} = R3. The set B = {e1, e2, e3} is linearly independent. Indeed, if there are scalars c1, c2, c3 such that c1e1 + c2e2 + c3e3 = 0, then clearly they must all be zero, c1 = c2 = c3 = 0. Therefore, by definition, B = {e1, e2, e3} is a basis for R3. This basis is called the standard basis for R3. Analogous arguments hold for {e1, e2, . . . , en} in Rn.

Example 16.6. Is B = {v1, v2, v3} a basis for R3?

v1 = (2, 0, −4), v2 = (−4, −2, 8), v3 = (4, −6, −6)

Solution. Form the matrix A = [v1 v2 v3] and row reduce:

A ∼ [1 0 0; 0 1 0; 0 0 1].

Therefore, the only solution to Ax = 0 is the trivial solution, and so B is linearly independent. Moreover, for any b ∈ R3, the augmented matrix [A b] is consistent. Therefore, the columns of A span all of R3:

Col(A) = span{v1, v2, v3} = R3.

Therefore, B is a basis for R3.

Example 16.7. In V = R4, consider the vectors

v1 = (1, 3, 0, −2), v2 = (2, −1, −2, 1), v3 = (−1, 4, 2, −3).

Let W = span{v1, v2, v3}. Is B = {v1, v2, v3} a basis for W?

Solution. By definition, B is a spanning set for W, so we need only determine whether B is linearly independent. Form the matrix A = [v1 v2 v3] and row reduce to obtain

A ∼ [1 0 1; 0 1 −1; 0 0 0; 0 0 0].

Hence, rank(A) = 2 and thus B is linearly dependent. Notice that v1 − v2 = v3. Therefore, B is not a basis of W.

Example 16.8.
Find a basis for the vector space of 2 × 2 matrices.

Example 16.9. Recall that an n × n matrix A is skew-symmetric if AT = −A. We proved that the set of n × n skew-symmetric matrices is a subspace. Find a basis for the set of 3 × 3 skew-symmetric matrices.

# 16.3 Dimension of a Vector Space

The following theorem will lead to the definition of the dimension of a vector space.

Theorem 16.10: Let V be a vector space. Then all bases of V have the same number of vectors.

Proof: We will prove the theorem for the case that V = Rn. We already know that the standard unit vectors {e1, e2, . . . , en} form a basis of Rn. Let {u1, u2, . . . , up} be nonzero vectors in Rn and suppose first that p > n. In Lecture 6, Theorem 6.7, we proved that any set of vectors in Rn containing more than n vectors is automatically linearly dependent. The reason is that the RREF of A = [u1 u2 · · · up] will contain at most r = n leading ones, and therefore d = p − n > 0. Therefore, the solution set of Ax = 0 contains non-trivial solutions. On the other hand, suppose instead that p < n. In Lecture 4, Theorem 4.11, we proved that a set of vectors {u1, . . . , up} in Rn spans Rn if and only if the RREF of A has exactly r = n leading ones. The largest possible value of r is r = p < n. Therefore, if p < n then {u1, u2, . . . , up} cannot be a basis for Rn. Thus, in either case (p > n or p < n), the set {u1, u2, . . . , up} cannot be a basis for Rn. Hence, any basis of Rn must contain n vectors. □

The previous theorem does not say that every set {v1, v2, . . . , vn} of nonzero vectors in Rn containing n vectors is automatically a basis for Rn. For example,

v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (2, 3, 0)

do not form a basis for R3 because x = (0, 0, 1) is not in the span of {v1, v2, v3}. All that we can say is that a set of vectors in Rn containing fewer or more than n vectors is automatically not a basis for Rn. From Theorem 16.10, any basis of Rn must have exactly n vectors. In fact, on a general abstract vector space V, if {v1, v2, . . . , vn} is a basis for V then any other basis for V must also have exactly n vectors. Because of this result, we can make the following definition.

Definition 16.11: Let V be a vector space. The dimension of V, denoted dim V, is the number of vectors in any basis of V. The dimension of the trivial vector space V = {0} is defined to be zero.

There is one subtle issue we are sweeping under the rug: Does every vector space have a basis? The answer is yes, but we will not prove this result here. Moving on, suppose that we have a set B = {v1, v2, . . . , vn} in Rn containing exactly n vectors. For B = {v1, v2, . . . , vn} to be a
basis of Rn, the set B must be linearly independent and span B = Rn. In fact, it can be shown that if B is linearly independent then the spanning condition span B = Rn is automatically satisfied, and vice-versa. For example, say the vectors {v1, v2, . . . , vn} in Rn are linearly independent, and put A = [v1 v2 · · · vn]. Then A−1 exists and therefore Ax = b is always solvable. Hence, Col(A) = span{v1, v2, . . . , vn} = Rn. In summary, we have the following theorem.

Theorem 16.12: Let B = {v1, . . . , vn} be vectors in Rn. If B is linearly independent then B is a basis for Rn. Likewise, if span{v1, v2, . . . , vn} = Rn then B is a basis for Rn.

Example 16.13. Do the columns of the matrix A form a basis for R4?

A = [2 3 3 −2; 4 7 8 −6; 0 0 1 0; −4 −6 −6 3]

Solution. Let v1, v2, v3, v4 denote the columns of A. Since we have n = 4 vectors in Rn, we need only check that they are linearly independent. Compute

det A = −2 ≠ 0.

Hence, rank(A) = 4 and thus the columns of A are linearly independent. Therefore, the vectors v1, v2, v3, v4 form a basis for R4.

A subspace W of a vector space V is a vector space in its own right, and therefore also has a dimension. By definition, if B = {v1, . . . , vk} is a linearly independent set in W and span{v1, . . . , vk} = W, then B is a basis for W and in this case the dimension of W is k. Since an n-dimensional vector space V requires exactly n vectors in any basis, if W is a strict subspace of V then dim W < dim V.

The general solution to Ax = 0 in parametric form is

x = t(−5, −3/2, 0, 1) + s(−6, −5/2, 1, 0) = t v1 + s v2.

By construction, the vectors

v1 = (−5, −3/2, 0, 1), v2 = (−6, −5/2, 1, 0)

span the null space Null(A) and they are linearly independent. Therefore, B = {v1, v2} is a basis for Null(A), and therefore dim Null(A) = 2. In general, the dimension of Null(A) is the number of free parameters in the solution set of the system Ax = 0, that is, dim Null(A)
= d = n − rank(A).

Example 16.15. Find a basis for Col(A) and dim Col(A) if

A = [1 2 3 −4 8; 1 2 0 2 8; 2 4 −3 10 9; 3 6 0 6 9].

Solution. By definition, the column space of A is the span of the columns of A, which we denote by A = [v1 v2 v3 v4 v5]. Thus, to find a basis for Col(A), by trial and error we could determine the largest subset of the columns of A that is linearly independent. For example, first we determine whether {v1, v2} is linearly independent. If yes, then add v3 and determine whether {v1, v2, v3} is linearly independent. If {v1, v2} is not linearly independent, then discard v2 and determine whether {v1, v3} is linearly independent. We continue this process until we have determined the largest subset of the columns of A that is linearly independent, and this will yield a basis for Col(A). Instead, we can use the fact that matrices that are row equivalent induce the same solution set for the associated homogeneous system. Hence, let B be the RREF of A:

B = rref(A) = [1 2 0 2 0; 0 0 1 −2 0; 0 0 0 0 1; 0 0 0 0 0].

By inspection, the columns b1, b3, b5 of B are linearly independent. It is easy to see that b2 = 2b1 and b4 = 2b1 − 2b3. These same linear relations hold for the columns of A: by inspection, v2 = 2v1 and v4 = 2v1 − 2v3. Thus, because b1, b3, b5 are linearly independent columns of B = rref(A), the columns v1, v3, v5 are linearly independent columns of A. Therefore, we have

Col(A) = span{v1, v3, v5} = span{ (1, 1, 2, 3), (3, 0, −3, 0), (8, 8, 9, 9) }

and consequently dim Col(A) = 3.

This procedure works in general: To find a basis for Col(A), row reduce A ∼ B until you can determine which columns of B are linearly independent. The columns of A in the same positions as the linearly independent columns of B form a basis for Col(A). WARNING: Do not take the linearly independent columns of B as a basis for Col(A). Always go back to the original matrix A to select the columns.

After this lecture you should know the following:
• what it means for a set to be linearly independent/dependent
• what a basis is (a spanning set that is linearly independent)
• the meaning of the dimension of a vector space
• how to determine if a given set in Rn is linearly independent
• how to find a basis for the null space and column space of a matrix A

# Lecture 17: The Rank Theorem

# 17.1 The Rank of a Matrix

We now give the definition of the rank of a matrix.

Definition 17.1: The rank of a matrix A is the dimension of its column space. We will use rank(A) to denote the rank of A.

Recall that Col(A) = Range(TA), and thus the rank of A is the dimension of the range of the linear mapping TA. The range of a mapping is sometimes called the image. We now define the nullity of
a matrix.

Definition 17.2: The nullity of a matrix A is the dimension of its nullspace Null(A). We will use nullity(A) to denote the nullity of A.

Recall that Null(A) = ker(TA), and thus the nullity of A is the dimension of the kernel of the linear mapping TA. The rank and nullity of a matrix are connected via the following fundamental theorem, known as the Rank Theorem.

Theorem 17.3: (Rank Theorem) Let A be an m × n matrix. The rank of A is the number of leading 1's in its RREF. Moreover, the following equation holds:

n = rank(A) + nullity(A).

Proof. A basis for the column space is obtained by computing rref(A) and identifying the columns that contain a leading 1. Each column of A corresponding to a column of rref(A) with a leading 1 is a basis vector for the column space of A. Therefore, if r is the number of leading 1's then r = rank(A). Now let d = n − r. The number of free parameters in the solution set of Ax = 0 is d, and therefore a basis for Null(A) will contain d vectors, that is, nullity(A) = d. Therefore, nullity(A) = n − rank(A).

Example 17.4. Find the rank and nullity of the matrix

A = [1 −2 2 3 −6; 0 −1 −3 1 1; −2 4 −3 −6 11].

Solution. Row reduce far enough to identify where the leading entries are. Applying 2R1 + R3:

A ∼ [1 −2 2 3 −6; 0 −1 −3 1 1; 0 0 1 0 −1].

There are r = 3 leading entries and therefore rank(A) = 3. The nullity is therefore nullity(A) = 5 − rank(A) = 2.

Example 17.5. Find the rank and nullity of the matrix

A = [1 −3 −1; −1 4 2; −1 3 0].

Solution. Row reduce far enough to identify where the leading entries are. Applying R1 + R2 and R1 + R3:

A ∼ [1 −3 −1; 0 1 1; 0 0 −1].

There are r = 3 leading entries and therefore rank(A) = 3. The nullity is therefore nullity(A) = 3 − rank(A) = 0. Another way to see that nullity(A) = 0 is as follows. From the above computation, A is invertible. Therefore, Null(A) = {0} contains only the zero vector, and the subspace {0} has dimension zero.

Using the rank and nullity of a matrix, we now provide further characterizations of invertible matrices.

Theorem 17.6: Let A be an n × n matrix. The following statements are equivalent:
(i) The columns of A form a basis for Rn.
(ii) Col(A) = Rn.
(iii) rank(A) = n.
(iv) Null(A) = {0}.
(v) nullity(A) = 0.
(vi) A is an invertible matrix.

After this lecture you should know the following:
• what the rank of a matrix is and how to compute it
• what the nullity of a matrix is and how to compute it
• the Rank Theorem

# Lecture 18: Coordinate Systems

# 18.1 Coordinates

Recall that a basis of a vector space V is a set of vectors B = {v1, v2, . . . , vn} in V such that
1. the set B spans all of V, that is, V = span(B), and
2. the set B is linearly independent.

Hence, if B is a basis for V, each vector x∗ ∈ V can be written
as a linear combination of B:

x∗ = c1v1 + c2v2 + · · · + cnvn.

Moreover, from the definition of linear independence given in Definition 6.1, any vector x ∈ span(B) can be written in only one way as a linear combination of v1, . . . , vn. In other words, for the x∗ above, there do not exist other scalars t1, . . . , tn such that also

x∗ = t1v1 + t2v2 + · · · + tnvn.

To see this, suppose that we can write x∗ in two different ways using B:

x∗ = c1v1 + c2v2 + · · · + cnvn
x∗ = t1v1 + t2v2 + · · · + tnvn.

Then

0 = x∗ − x∗ = (c1 − t1)v1 + (c2 − t2)v2 + · · · + (cn − tn)vn.

Since B = {v1, . . . , vn} is linearly independent, the only linear combination of v1, . . . , vn that gives the zero vector 0 is the trivial linear combination. Therefore, it must be the case that ci − ti = 0, or equivalently ci = ti, for all i = 1, 2, . . . , n. Thus, there is only one way to write x∗ in terms of B = {v1, . . . , vn}. Hence, relative to the basis B = {v1, v2, . . . , vn}, the scalars c1, c2, . . . , cn uniquely determine the vector x, and vice-versa. Our preceding discussion on the unique representation property of vectors in a given basis leads to the following definition.

Definition 18.1: Let B = {v1, . . . , vn} be a basis for V and let x ∈ V. The coordinates of x relative to the basis B are the unique scalars c1, c2, . . . , cn such that

x = c1v1 + c2v2 + · · · + cnvn.

In vector notation, the B-coordinates of x will be denoted by

[x]B = (c1, c2, . . . , cn)

and we will call [x]B the coordinate vector of x relative to B. The notation [x]B indicates that these are coordinates of x with respect to the basis B. If it is clear what basis we are working with, we will omit the subscript B and simply write [x] for the coordinates of x relative to B.

Example 18.2. One can verify that B = { (1, 1), (−1, 1) } is a basis for R2. Find the coordinates of v = (3, 1) relative to B.

Solution. Let v1 = (1, 1) and let v2 = (−1, 1). By definition, the coordinates of v with respect to B are the scalars c1, c2 such that

v = c1v1 + c2v2 = [1 −1; 1 1] (c1, c2).

If we put P = [v1 v2] and let [v]B = (c1, c2), then we need to solve the linear system

v = P[v]B.

Solving the linear system, one finds that the solution is [v]B = (2, −1), and therefore this is the B-coordinate vector of v, or the coordinates of v, relative to B.

It is clear how the procedure of the previous example can be generalized. Let B = {v1, v2, . . . , vn} be a basis for Rn and let v be any vector in Rn. Put P = [v1 v2 · · · vn
]. Then the B-coordinate vector of v is the unique column vector [v]B solving the linear system Px = v; that is, x = [v]B is the unique solution to Px = v. Because v1, v2, . . . , vn are linearly independent, the solution to Px = v is

[v]B = P−1v.

We remark that if an inconsistent row arises when you row reduce the augmented matrix [P v], then you have made an error in your row reduction algorithm. In summary, to find coordinates with respect to a basis B in Rn, we need to solve a square linear system.

Example 18.3. Let

v1 = (3, 6, 2), v2 = (−1, 0, 1), x = (3, 12, 7)

and let B = {v1, v2}. One can show that B is linearly independent and therefore a basis for W = span{v1, v2}. Determine if x is in W, and if so, find the coordinate vector of x relative to B.

Solution. By definition, x is in W = span{v1, v2} if we can write x as a linear combination of v1, v2:

x = c1v1 + c2v2.

Form the associated augmented matrix and row reduce:

[3 −1 3; 6 0 12; 2 1 7] ∼ [1 0 2; 0 1 3; 0 0 0].

The system is consistent, with solution c1 = 2 and c2 = 3. Therefore, x is in W, and the B-coordinates of x are

[x]B = (2, 3).

Example 18.4. What are the coordinates of v = (3, 11, −7) in the standard basis E = {e1, e2, e3}?

Solution. Clearly,

v = (3, 11, −7) = 3e1 + 11e2 − 7e3.

Therefore, the coordinate vector of v relative to {e1, e2, e3} is

[v]E = (3, 11, −7).

Example 18.5. Let P3[t] be the vector space of polynomials of degree at most 3.
(i) Show that B = {1, t, t2, t3} is a basis for P3[t].
(ii) Find the coordinates of v(t) = 3 − t2 − 7t3 relative to B.

Solution. The set B = {1, t, t2, t3} is a spanning set for P3[t]. Indeed, any polynomial u(t) = c0 + c1t + c2t2 + c3t3 is clearly a linear combination of 1, t, t2, t3. Is B linearly independent? Suppose that there exist scalars c0, c1, c2, c3 such that

c0 + c1t + c2t2 + c3t3 = 0.

Since the above equality must hold for all values of t, we conclude that c0 = c1 = c2 = c3 = 0. Therefore, B is linearly independent, and consequently a basis for P3[t]. In the basis B, the coordinates of v(t) = 3 − t2 − 7t3 are

[v(t)]B = (3, 0, −1, −7).

The basis B = {1, t, t2, t3} is called the standard basis in P3[t].

Example 18.6. Show that

B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }

is a basis for M2×2. Find the coordinates of A = [3 0; −4 −1] relative to B.

Solution. Any matrix M = [m11 m12; m21 m22] can be written as a linear combination of the matrices in B:

[m11 m12; m21 m22] = m11[1 0; 0 0] + m12[0 1; 0 0] + m21[0 0; 1 0] + m22[0 0; 0 1].

If

c1[1 0; 0 0] + c2[0 1; 0 0] + c3[0 0; 1 0] + c4[0 0; 0 1] = [c1 c2; c3 c4] = [0 0; 0 0]

then clearly c1 = c2 = c3 = c4 = 0. Therefore, B is linearly independent, and consequently a basis for M2×2. The coordinates of A = [3 0; −4 −1] in the basis B are

[A]B = (3, 0, −4, −1).

The basis B above is the standard basis of M2×2.

# 18.2 Coordinate Mappings

Let B = {v1, v2, . . . , vn} be a basis of Rn and let P = [v1 v2 · · · vn] ∈ Mn×n. If x ∈ Rn and [x]B are the B-coordinates of x relative to B, then

x = P[x]B. (⋆)

Hence, thinking of P : Rn → Rn as a linear mapping, P maps B-coordinate vectors to coordinate vectors relative to the standard basis of Rn. For this reason, we call P the change-of-coordinates matrix from the basis B to the standard basis in Rn. If we need to emphasize that P is constructed from the basis B, we will write PB instead of just P. Multiplying equation (⋆) by P−1 we obtain

P−1x = [x]B.

Therefore, P−1 maps coordinate vectors in the standard basis to coordinates relative to B.

Example 18.7. The columns of the matrix P form a basis B for R3:

P = [1 3 3; −1 −4 −2; 0 0 −1].

(a) What vector x ∈ R3 has B-coordinates
[x]B = (1, 0, −1)?
(b) Find the B-coordinates of v = (2, −1, 0).

Solution. The matrix P maps B-coordinates to standard coordinates in R3. Therefore,

x = P[x]B = (−2, 1, 1).

On the other hand, the inverse matrix P−1 maps standard coordinates in R3 to B-coordinates. One can verify that

P−1 = [4 3 6; −1 −1 −1; 0 0 −1].

Therefore, the B-coordinates of v are

[v]B = P−1v = [4 3 6; −1 −1 −1; 0 0 −1] (2, −1, 0) = (5, −1, 0).

When V is an abstract vector space, e.g. Pn[t] or Mn×n, the notion of a coordinate mapping is similar to the case when V = Rn. If V is an n-dimensional vector space and B = {v1, v2, . . . , vn} is a basis for V, we define the coordinate mapping P : V → Rn relative to B as the mapping

P(v) = [v]B.

Example 18.8. Let V = M2×2 and let B = {A1, A2, A3, A4} be the standard basis for M2×2. What is P : M2×2 → R4?

Solution. Recall,

B = {A1, A2, A3, A4} = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }.

Then for any A = [a11 a12; a21 a22] we have

P([a11 a12; a21 a22]) = (a11, a12, a21, a22).

# 18.3 Matrix Representation of a Linear Map

Let V and W be vector spaces and let T : V → W be a linear mapping. Then by definition of a linear mapping, T(v + u) = T(v) + T(u) and T(αv) = αT(v) for every v, u ∈ V and α ∈ R. Let B = {v1, v2, . . . , vn} be a basis of V and let γ = {w1, w2, . . . , wm} be a basis of W. Then for any v ∈ V there exist scalars c1, c2, . . . , cn such that

v = c1v1 + c2v2 + · · · + cnvn

and thus [v]B = (c1, c2, . . . , cn) are the coordinates of v in the basis B. By linearity of the mapping T we have

T(v) = T(c1v1 + c2v2 + · · · + cnvn) = c1T(v1) + c2T(v2) + · · · + cnT(vn).

Now each vector T(vj) is in W, and therefore, because γ is a basis of W, there are scalars a1,j, a2,j, . . . , am,j such that

T(vj) = a1,j w1 + a2,j w2 + · · · + am,j wm.

In other words, [T(vj)]γ = (a1,j, a2,j, . . . , am,j). Substituting T(vj) = a1,j w1 + a2,j w2 + · · · + am,j wm for each j = 1, 2, . . . , n into T(v) = c1T(v1) + c2T(v2) + · · · + cnT(vn) and then simplifying, we get

T(v) = ∑_{i=1}^{m} ( ∑_{j=1}^{n} a_{i,j} c_j ) w_i.

Therefore, [T(v)]γ = A[v]B, where A is the m × n matrix given by

A = [ [T(v1)]γ [T(v2)]γ · · · [T(vn)]γ ].

The matrix A is the matrix representation of the linear mapping T in the bases B and γ.

Example 18.9. Consider the vector space
V = P2[t] of polynomials of degree no more than two, and let T : V → V be defined by

T(v(t)) = 4v′(t) − 2v(t).

It is straightforward to verify that T is a linear mapping. Let B = {v1, v2, v3} = {t − 1, 3 + 2t, t2 + 1}.
(a) Verify that B is a basis of V.
(b) Find the coordinates of v(t) = −t2 + 3t + 1 in the basis B.
(c) Find the matrix representation of T in the basis B.

Solution. (a) Suppose that there are scalars c1, c2, c3 such that

c1v1 + c2v2 + c3v3 = 0.

Then expanding and collecting like terms we obtain

c3t2 + (c1 + 2c2)t + (−c1 + 3c2 + c3) = 0.

Since the above holds for all t ∈ R, we must have

c3 = 0, c1 + 2c2 = 0, −c1 + 3c2 + c3 = 0.

Solving for c1, c2, c3 we obtain c1 = 0, c2 = 0, c3 = 0. Hence, the only linear combination of the vectors in B that produces the zero vector is the trivial linear combination. This proves by definition that B is linearly independent. Since we already know that dim(P2) = 3 and B contains 3 vectors, B is a basis for P2.

(b) The coordinates of v(t) = −t2 + 3t + 1 are the unique scalars (c1, c2, c3) such that c1v1 + c2v2 + c3v3 = v. In this case the linear system is

c3 = −1, c1 + 2c2 = 3, −c1 + 3c2 + c3 = 1

and solving yields c1 = 1, c2 = 1, and c3 = −1. Hence, [v]B = (1, 1, −1).

(c) The matrix representation A of T is

A = [ [T(v1)]B [T(v2)]B [T(v3)]B ].

Now we compute directly that

T(v1) = −2t + 6, T(v2) = −4t + 2, T(v3) = −2t2 + 8t − 2.

One then computes

[T(v1)]B = (−18/5, 4/5, 0), [T(v2)]B = (−16/5, −2/5, 0), [T(v3)]B = (24/5, 8/5, −2),

and therefore

A = [−18/5 −16/5 24/5; 4/5 −2/5 8/5; 0 0 −2].

After this lecture you should know the following:
• what coordinates are (you need a basis)
• how to find coordinates relative to a basis
• the interpretation of the change-of-coordinates matrix as a mapping that transforms one set of coordinates to another

# Lecture 19: Change of Basis

# 19.1 Review of Coordinate Mappings on Rn

Let B = {v1, . . . , vn} be a basis for Rn and let PB = [v1 v2 · · · vn]. If x ∈ Rn and [x]B is the coordinate vector of x in the basis B, then

x = PB[x]B.

The components of the vector x are the coordinates of x in the standard basis E = {e1, . . . , en}. In other words, [x]E = x. Therefore,

[x]E = PB[x]B.

We can therefore interpret PB as the matrix mapping that maps the B-coordinates of x to the E-coordinates of x. To make this more explicit, we sometimes use the notation E PB to indicate that E PB maps B-coordinates to E-coordinates:

[x]E = (E
PB)[x]B.

If we multiply the equation [x]E = (E PB)[x]B on the left by the inverse of E PB, we obtain

(E PB)−1[x]E = [x]B.

Hence, the matrix (E PB)−1 maps standard coordinates to B-coordinates; see Figure 19.1. It is natural then to introduce the notation

B PE = (E PB)−1.

Figure 19.1: The matrix B PE maps E-coordinates to B-coordinates.

Example 19.1. Let

v1 = (1, 0, 0), v2 = (−3, 4, 0), v3 = (3, −6, 3), x = (−8, 2, 3).

(a) Show that the set of vectors B = {v1, v2, v3} forms a basis for R3.
(b) Find the change-of-coordinates matrix from B to standard coordinates.
(c) Find the coordinate vector [x]B for the given x.

Solution. Let

PB = [1 −3 3; 0 4 −6; 0 0 3].

It is clear that det(PB) = 12, and therefore v1, v2, v3 are linearly independent. Therefore, B is a basis for R3. The matrix PB takes B-coordinates to standard coordinates. The B-coordinate vector [x]B = (c1, c2, c3) is the unique solution to the linear system

x = PB[x]B.

Solving the linear system with augmented matrix [PB x], we obtain

[x]B = (−5, 2, 1).

We verify that [x]B = (−5, 2, 1) are indeed the coordinates of x = (−8, 2, 3) in the basis B = {v1, v2, v3}:

(−5)v1 + (2)v2 + (1)v3 = (−5, 0, 0) + (−6, 8, 0) + (3, −6, 3) = (−8, 2, 3) = x.

# 19.2 Change of Basis

We saw in the previous section that the matrix E PB takes as input the B-coordinates [x]B of a vector x and returns the coordinates of x in the standard basis. We now consider the situation of dealing with two bases B and C, where neither is assumed to be the standard basis E. Hence, let B = {v1, v2, . . . , vn} and C = {w1, . . . , wn} be two bases of Rn, and let

E PB = [v1 v2 · · · vn], E PC = [w1 w2 · · · wn].

Then if [x]C is the coordinate vector of x in the basis C, we have

x = (E PC)[x]C.

How do we transform B-coordinates of x to C-coordinates of x, and vice-versa? To answer this question, start from the relations

x = (E PB)[x]B, x = (E PC)[x]C.

Then

(E PC)[x]C = (E PB)[x]B

and because E PC is invertible we have that

[x]C = (E PC)−1(E PB)[x]B.

Hence, the matrix (E PC)−1(E PB) maps the B-coordinates of x to the C-coordinates of x. For this reason, it is natural to use the notation (see Figure 19.2)

C PB = (E PC)−1(E PB).

Figure 19.2: The matrix C PB maps B-coordinates to C-coordinates.

If we expand (E PC)−1(E PB), we obtain

(E PC)−1(E PB) = [ (E PC)−1v1
(E PC)−1v2 · · · (E PC)−1vn ].

Therefore, the ith column of (E PC)−1(E PB), namely (E PC)−1vi, is the coordinate vector of vi in the basis C = {w1, w2, . . . , wn}. To compute C PB we augment E PC and E PB and row reduce fully:

[E PC | E PB] ∼ [In | C PB].

Example 19.2. Let

B = { (1, −3), (−2, 4) }, C = { (−7, 9), (−5, 7) }.

It can be verified that B = {v1, v2} and C = {w1, w2} are bases for R2.
(a) Find the matrix that takes B-coordinates to C-coordinates.
(b) Find the matrix that takes C-coordinates to B-coordinates.
(c) Let x = (0, −2). Find [x]B and [x]C.

Solution. The matrix E PB = [v1 v2] maps B-coordinates to standard E-coordinates. The matrix E PC = [w1 w2] maps C-coordinates to standard E-coordinates. As we just showed, the matrix that maps B-coordinates to C-coordinates is

C PB = (E PC)−1(E PB).

It is straightforward to compute that

(E PC)−1 = [−7/4 −5/4; 9/4 7/4].

Therefore,

C PB = (E PC)−1(E PB) = [−7/4 −5/4; 9/4 7/4][1 −2; −3 4] = [2 −3/2; −3 5/2].

To compute B PC, we can simply invert C PB. One finds that

(C PB)−1 = [5 3; 6 4]

and therefore

B PC = [5 3; 6 4].

Given that x = (0, −2), to find [x]B we must solve the linear system

E PB[x]B = x.

Row reducing the augmented matrix [E PB x], we obtain

[x]B = (2, 1).

Next, to find [x]C we can solve the linear system

E PC[x]C = x.

Alternatively, since we now know [x]B and C PB has been computed, to find [x]C we simply multiply C PB by [x]B:

[x]C = C PB[x]B = [2 −3/2; −3 5/2](2, 1) = (5/2, −7/2).

Let's verify that [x]C = (5/2, −7/2) are indeed the C-coordinates of x = (0, −2):

E PC[x]C = [−7 −5; 9 7](5/2, −7/2) = (0, −2).

After this lecture you should know the following:
• how to compute a change of basis matrix
• how to use the change of basis matrix to map one set of coordinates into another

# Lecture 20: Inner Products and Orthogonality

# 20.1 Inner Product on Rn

The inner product on Rn generalizes the notion of the dot product of vectors in R2 and R3 that you are already familiar with.

Definition 20.1: Let u = (u1, u2, . . . , un) and v = (v1, v2, . . . , vn) be vectors in Rn. The inner product of u and v is

u • v = u1v1 + u2v2 + · · · + unvn.

Notice that the inner product u • v can be computed as a matrix multiplication: u • v = uT v, the 1 × n row vector [u1 u2 · · · un] times the column vector (v1, v2, . . . , vn).

The following theorem summarizes the basic algebraic properties of the inner product.

Theorem 20.2: Let u, v, w be vectors in Rn and let α be a scalar. Then
(a)
u • v = v • u
(b) (u + v) • w = u • w + v • w
(c) (αu) • v = α(u • v) = u • (αv)
(d) u • u ≥ 0, and u • u = 0 if and only if u = 0

Example 20.3. Let u = (2, −5, −1) and let v = (3, 2, −3). Compute u • v, v • u, u • u, and v • v.

Solution. By definition:

u • v = (2)(3) + (−5)(2) + (−1)(−3) = −1
v • u = (3)(2) + (2)(−5) + (−3)(−1) = −1
u • u = (2)(2) + (−5)(−5) + (−1)(−1) = 30
v • v = (3)(3) + (2)(2) + (−3)(−3) = 22.

We now define the length or norm of a vector in Rn.

Definition 20.4: The length or norm of a vector u ∈ Rn is defined as

‖u‖ = √(u • u) = √(u1² + u2² + · · · + un²).

A vector u ∈ Rn with norm 1 will be called a unit vector: ‖u‖ = 1.

Below is an important property of the inner product.

Theorem 20.5: Let u ∈ Rn and let α be a scalar. Then ‖αu‖ = |α|‖u‖.

Proof. We have

‖αu‖ = √((αu) • (αu)) = √(α²(u • u)) = |α|√(u • u) = |α|‖u‖.

By Theorem 20.5, any non-zero vector u ∈ Rn can be scaled to obtain a new unit vector in the same direction as u. Indeed, suppose that u is non-zero, so that ‖u‖ ≠ 0. Define the new vector

v = (1/‖u‖) u.
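This normalization step can be sketched numerically. A minimal check with NumPy, using the vector u = (2, 3, 6) that also appears in Example 20.6 below:

```python
import numpy as np

# The vector u = (2, 3, 6) from Example 20.6
u = np.array([2.0, 3.0, 6.0])

# ||u|| = sqrt(u . u)
norm_u = np.sqrt(u @ u)

# Normalization: v = (1/||u||) u is the unit vector in the direction of u
v = u / norm_u

print(norm_u)                   # 7.0
print(np.isclose(v @ v, 1.0))   # True, since ||v|| = 1
```

Dividing by the scalar ‖u‖ never changes the direction of u, only its length, which is exactly what Theorem 20.5 predicts.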
> 154 Lecture 20 Notice that α = 1 > ‖u‖ is just a scalar and thus v is a scalar multiple of u. Then by Theorem 20.5 we have that ‖v‖ = ‖αu‖ = |α| · ‖ u‖ = 1 ‖u‖ · ‖ u‖ = 1 and therefore v is a unit vector, see Figure 20.1 . The process of taking a non-zero vector u and creating the new vector v = 1 > ‖u‖ u is sometimes called normalization of u. u v = 1 > ‖u‖ u Figure 20.1: Normalizing a non-zero vector. Example 20.6. Let u = (2 , 3, 6). Compute ‖u‖ and find the unit vector v in the same direction as u. Solution. By definition, ‖u‖ = √u • u = √22 + 3 2 + 6 2 = √49 = 7 . Then the unit vector that is in the same direction as u is v = 1 ‖u‖u = 1 7  236  =  2/73/76/7  Verify that ‖v‖ = 1: ‖v‖ = √(2 /7) 2 + (3 /7) 2 + (6 /7) 2 = √4/49 + 9 /49 + 36 /49 = √49 /49 = √1 = 1 . Now that we have the definition of the length of a vector, we can define the notion of distance between two vectors. Definition 20.7: Let u and v be vectors in Rn. The distance between u and v is the length of the vector u − v. We will denote the distance between u and v by d( u, v). In other words, d( u, v) = ‖u − v‖. Example 20.8. Find the distance between u = [ 3 −2 ] and v = [ 7 −9 ] . Solution. We compute: d( u, v) = ‖u −
v‖ = √((3 − 7)² + (−2 + 9)²) = √65.

# 20.2 Orthogonality

In the context of vectors in R2 and R3, orthogonality is synonymous with perpendicularity. Below is the general definition.

Definition 20.9: Two vectors u and v in Rn are said to be orthogonal if u • v = 0.

In R2 and R3, the notion of orthogonality should be familiar to you. In fact, using the Law of Cosines in R2 or R3, one can prove that

u • v = ‖u‖ · ‖v‖ cos(θ)   (20.1)

where θ is the angle between u and v. If θ = π/2 then clearly u • v = 0. In higher dimensions, i.e., n ≥ 4, we can use equation (20.1) to define the angle between vectors u and v. In other words, the angle between any two vectors u and v in Rn is defined to be

θ = arccos( (u • v) / (‖u‖ · ‖v‖) ).

The general notion of orthogonality in Rn leads to the following theorem from grade school.

Theorem 20.10: (Pythagorean Theorem) Two vectors u and v are orthogonal if and only if ‖u + v‖² = ‖u‖² + ‖v‖².

Proof. First recall that ‖u + v‖ = √((u + v) • (u + v)) and therefore

‖u + v‖² = (u + v) • (u + v) = u • u + u • v + v • u + v • v = ‖u‖² + 2(u • v) + ‖v‖².

Therefore, ‖u + v‖² = ‖u‖² + ‖v‖² if and only if u • v = 0.

We now introduce orthogonal sets.

Definition 20.11: A set of vectors {u1, u2, . .
. , up} is said to be an orthogonal set if any pair of distinct vectors ui, uj is orthogonal, that is, ui • uj = 0 whenever i ≠ j.

In the following theorem we prove that orthogonal sets are linearly independent.

Theorem 20.12: Let {u1, u2, . . . , up} be an orthogonal set of non-zero vectors in Rn. Then the set {u1, u2, . . . , up} is linearly independent. In particular, if p = n then the set {u1, u2, . . . , un} is a basis for Rn.

Proof. Suppose that there are scalars c1, c2, . . . , cp such that

c1u1 + c2u2 + · · · + cpup = 0.

Take the inner product of u1 with both sides of the above equation:

c1(u1 • u1) + c2(u2 • u1) + · · · + cp(up • u1) = 0 • u1.

Since the set is orthogonal, the left-hand side of the last equation simplifies to c1(u1 • u1). The right-hand side simplifies to 0. Hence, c1(u1 • u1) = 0. But u1 • u1 = ‖u1‖² is not zero, and therefore the only way that c1(u1 • u1) = 0 is if c1 = 0. Repeat the above steps using u2, u3, . . . , up and conclude that c2 = 0, c3 = 0, . . . , cp = 0. Therefore, {u1, . . . , up} is linearly independent. If p = n, then the set {u1, . . . , un} is automatically a basis for Rn.

Example 20.13. Is the set {u1, u2, u3} an orthogonal set?

u1 = (1, −2, 1), u2 = (0, 1, 2), u3 = (−5, −2, 1)
Solution. Compute

u1 • u2 = (1)(0) + (−2)(1) + (1)(2) = 0
u1 • u3 = (1)(−5) + (−2)(−2) + (1)(1) = 0
u2 • u3 = (0)(−5) + (1)(−2) + (2)(1) = 0

Therefore, {u1, u2, u3} is an orthogonal set. By Theorem 20.12, the set {u1, u2, u3} is linearly independent. To verify linear independence, we computed that det([u1 u2 u3]) = 30, which is non-zero.

We now introduce orthonormal sets.

Definition 20.14: A set of vectors {u1, u2, . . . , up} is said to be an orthonormal set if it is an orthogonal set and each vector ui in the set is a unit vector.

Consider the previous orthogonal set in R3:

{u1, u2, u3} = {(1, −2, 1), (0, 1, 2), (−5, −2, 1)}.

It is not an orthonormal set because none of u1, u2, u3 are unit vectors. Explicitly, ‖u1‖ = √6, ‖u2‖ = √5, and ‖u3‖ = √30. However, from an orthogonal set we can create an orthonormal set by normalizing each vector. Hence, the set

{v1, v2, v3} = {(1/√6, −2/√6, 1/√6), (0, 1/√5, 2/√5), (−5/√30, −2/√30, 1/√30)}

is an orthonormal set.

# 20.3 Coordinates in an Orthonormal Basis

As we will see in this section, a basis B = {u1, u2, . . . , un} of Rn that is also an orthonormal set is highly desirable when performing computations with coordinates. To see why, let x be any vector in Rn and suppose we want to find the coordinates of x in the basis B, that is, we seek to find [x]B = (c1, c2, . . . , c
n). By definition, the coordinates c1, c2, . . . , cn satisfy the equation

x = c1u1 + c2u2 + · · · + cnun.

Taking the inner product of u1 with both sides of the above equation, and using the fact that u1 • u2 = 0, u1 • u3 = 0, . . . , u1 • un = 0, we obtain

u1 • x = c1(u1 • u1) = c1(1) = c1

where we also used the fact that u1 is a unit vector. Thus, c1 = u1 • x! Repeating this procedure with u2, u3, . . . , un we obtain the remaining coefficients c2, . . . , cn:

c2 = u2 • x
c3 = u3 • x
...
cn = un • x.

Our previous computation proves the following theorem.

Theorem 20.15: Let B = {u1, u2, . . . , un} be an orthonormal basis for Rn. The coordinate vector of x in the basis B is

[x]B = (u1 • x, u2 • x, . . . , un • x).

Hence, computing coordinates with respect to an orthonormal basis can be done without performing any row operations; all we need to do is compute inner products! We make the important observation that an alternate expression for [x]B is

[x]B =
[ u1ᵀ ]
[ u2ᵀ ]  x  =  Uᵀx
[  ⋮  ]
[ unᵀ ]

where U = [u1 u2 · · · un]. On the other hand, recall that by definition [x]B satisfies U[x]B = x, and therefore [x]B = U⁻¹x. If we compare the two identities

[x]B = U⁻¹x and [x]B = Uᵀx

we
suspect then that U⁻¹ = Uᵀ. This is indeed the case. To see this, let B = {u1, u2, . . . , un} be an orthonormal basis for Rn and put U = [u1 u2 · · · un]. Consider the matrix product UᵀU and, recalling that ui • uj = uiᵀuj, we obtain

UᵀU =
[ u1ᵀu1  u1ᵀu2  · · ·  u1ᵀun ]
[ u2ᵀu1  u2ᵀu2  · · ·  u2ᵀun ]
[   ⋮      ⋮     ⋱      ⋮   ]
[ unᵀu1  unᵀu2  · · ·  unᵀun ]
= In.

Therefore, U⁻¹ = Uᵀ. A matrix U ∈ Rn×n such that UᵀU = UUᵀ = In is called an orthogonal matrix. Hence, if B = {u1, u2, . . . , un} is an orthonormal set then the matrix U = [u1 u2 · · · un] is an orthogonal matrix.

Example 20.16. Consider the vectors

v1 = (1, 0, 1), v2 = (−1, 4, 1), v3 = (2, 1, −2), x = (1, 2, −1).

(a) Show that {v1, v2, v3} is an orthogonal basis for R3.
(b) Then, if necessary, normalize the basis vectors vi to obtain an orthonormal basis B = {u1, u2, u3} for R3.
(c) For the given x find [x]B.

Solution. (a) We compute that v1 • v2 = 0, v1 • v3 = 0, and v2 • v3 = 0, and thus {v1, v2, v3} is an orthogonal set. Since orthogonal sets are linearly independent and {v1, v2,
v3} consists of three vectors, then {v1, v2, v3} is a basis for R3.

(b) We compute that ‖v1‖ = √2, ‖v2‖ = √18, and ‖v3‖ = 3. Then let

u1 = (1/√2, 0, 1/√2), u2 = (−1/√18, 4/√18, 1/√18), u3 = (2/3, 1/3, −2/3).

Then B = {u1, u2, u3} is an orthonormal set, and since B consists of three vectors, B is an orthonormal basis of R3.

(c) Finally, computing coordinates in an orthonormal basis is easy:

[x]B = (u1 • x, u2 • x, u3 • x) = (0, 6/√18, 2).

Example 20.17. The standard unit basis

E = {e1, e2, e3} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}

in R3 is an orthonormal basis. Given any x = (x1, x2, x3), we have [x]E = x. On the other hand, clearly

x1 = x • e1, x2 = x • e2, x3 = x • e3.

Example 20.18. (Orthogonal Complements) Let W be a subspace of Rn. The orthogonal complement of W, which we denote by W⊥, consists of the vectors in Rn that are orthogonal to every vector in W. Using set notation:

W⊥ = {u ∈ Rn : u • w = 0 for every w ∈ W}.

(a) Show that W⊥ is a subspace.
(b) Let w1 = (0, 1, 1, 0), let w2 = (1, 0, −1, 0), and let W = span{w1, w2}. Find a basis for W⊥.

Solution. (a) The vector 0 is orthogonal to every vector in Rn and therefore it is certainly orthogonal to every vector in W. Thus, 0 ∈ W⊥. Now suppose that u1, u2 are two vectors in W⊥. Then for any vector
w ∈ W it holds that

(u1 + u2) • w = u1 • w + u2 • w = 0 + 0 = 0.

Therefore, u1 + u2 is also orthogonal to w, and since w is an arbitrary vector in W, (u1 + u2) ∈ W⊥. Lastly, let α be any scalar and let u ∈ W⊥. Then for any vector w in W we have that

(αu) • w = α(u • w) = α · 0 = 0.

Therefore, αu is orthogonal to w, and since w is an arbitrary vector in W, (αu) ∈ W⊥. This proves that W⊥ is a subspace of Rn.

(b) A vector u = (u1, u2, u3, u4) is in W⊥ if u • w1 = 0 and u • w2 = 0. In other words, if

u2 + u3 = 0
u1 − u3 = 0

This is a linear system for the unknowns u1, u2, u3, u4. Setting u3 = t and u4 = s, the general solution to the linear system is

u = t(1, −1, 1, 0) + s(0, 0, 0, 1).

Therefore, a basis for W⊥ is {(1, −1, 1, 0), (0, 0, 0, 1)}.

After this lecture you should know the following:
• how to compute inner products, norms, and distances
• how to normalize vectors to unit length
• what orthogonality is and how to check for it
• what an orthogonal and orthonormal basis is
• the advantages of working with an orthonormal basis when computing coordinate vectors

# Lecture 21 Eigenvalues and Eigenvectors

# 21.1 Eigenvectors and Eigenvalues

An n × n matrix A can be thought of as the linear mapping that takes
any arbitrary vector x ∈ Rn and outputs a new vector Ax. In some cases, the new output vector Ax is simply a scalar multiple of the input vector x, that is, there exists a scalar λ such that Ax = λx. This case is so important that we make the following definition.

Definition 21.1: Let A be an n × n matrix and let v be a non-zero vector. If Av = λv for some scalar λ, then we call the vector v an eigenvector of A and we call the scalar λ an eigenvalue of A corresponding to v.

Hence, an eigenvector v of A is simply scaled by a scalar λ under multiplication by A. Eigenvectors are by definition non-zero vectors because A0 is trivially a scalar multiple of 0, and then it is not clear what the corresponding eigenvalue should be.

Example 21.2. Determine if the given vectors v and u are eigenvectors of A. If yes, find the eigenvalue of A associated to the eigenvector.

A =
[ 4  −1  6 ]
[ 2   1  6 ]
[ 2  −1  8 ]

v = (−3, 0, 1), u = (−1, 2, 1).

Solution. Compute

Av = (−6, 0, 2) = 2(−3, 0, 1) = 2v.

Hence, Av = 2v, and thus v is an eigenvector of A with corresponding eigenvalue λ = 2. On the other hand,

Au = (0, 6, 4).

There is no scalar λ such that (0, 6, 4) = λ(−1, 2, 1). Therefore, u is not an eigenvector of A.

Example 21.3. Is v an eigenvector of A? If yes,
find the eigenvalue of A associated to v:

A =
[  2  −1  −1 ]
[ −1   2  −1 ]
[ −4   2   2 ]

v = (1, 1, 1).

Solution. We compute Av = (0, 0, 0) = 0. Hence, if λ = 0 then λv = 0 and thus Av = λv. Therefore, v is an eigenvector of A with corresponding eigenvalue λ = 0.

How does one find the eigenvectors/eigenvalues of a matrix A? The general procedure is to first find the eigenvalues of A and then, for each eigenvalue, find the corresponding eigenvectors. In this section, however, we will instead suppose that we have already found the eigenvalues of A and concern ourselves with finding the associated eigenvectors. Suppose then that λ is known to be an eigenvalue of A. How do we find an eigenvector v corresponding to the eigenvalue λ? To answer this question, we note that if v is to be an eigenvector of A with eigenvalue λ then v must satisfy the equation Av = λv. We can rewrite this equation as Av − λv = 0 which, after using the distributive property of matrix multiplication, is equivalent to

(A − λI)v = 0.

The last equation says that if v is to be an eigenvector of A with eigenvalue λ then v must be in the null space of A − λI:

v ∈ Null(A − λI).

In summary, if λ is known to be an eigenvalue of A, then to find the eigenvectors corresponding to λ we must solve the homogeneous system

(A − λI)x = 0.

Recall that the null space of any matrix is a subspace, and for this reason we call the subspace Null(A − λI) the eigenspace of A corresponding to λ.
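The recipe above — solve (A − λI)x = 0 — can also be sketched numerically. The following NumPy sketch is an aside added here (the notes work by hand); the helper name `eigenspace_basis` is invented for this illustration:

```python
import numpy as np

def eigenspace_basis(A, lam, tol=1e-10):
    """Return a matrix whose columns form a basis for Null(A - lam*I)."""
    M = A - lam * np.eye(A.shape[0])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # rows of Vt past the rank span the null space

# The matrix of Example 21.3, which has eigenvalue lam = 0 with eigenvector (1, 1, 1)
A = np.array([[2.0, -1.0, -1.0], [-1.0, 2.0, -1.0], [-4.0, 2.0, 2.0]])
B = eigenspace_basis(A, 0.0)
print(B.shape[1])               # 1: the eigenspace for lam = 0 is one-dimensional
print(np.allclose(A @ B, 0.0))  # True: each basis column v satisfies Av = 0 = 0*v
```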
Example 21.4. It is known that λ = 4 is an eigenvalue of

A =
[ −4   6  3 ]
[  1   7  9 ]
[  8  −6  1 ]

Find a basis for the eigenspace of A corresponding to λ = 4.

Solution. First compute

A − 4I =
[ −8   6   3 ]
[  1   3   9 ]
[  8  −6  −3 ]

Find a basis for the null space of A − 4I. Swapping R1 and R2:

[  1   3   9 ]
[ −8   6   3 ]
[  8  −6  −3 ]

Applying 8R1 + R2 and −8R1 + R3:

[ 1    3    9 ]
[ 0   30   75 ]
[ 0  −30  −75 ]

Finally, R2 + R3 gives:

[ 1   3   9 ]
[ 0  30  75 ]
[ 0   0   0 ]

Hence, the general solution to the homogeneous system (A − 4I)x = 0 is

x = t(−3/2, −5/2, 1)

where t is an arbitrary scalar. Therefore, the eigenspace of A corresponding to λ = 4 is

span{(−3/2, −5/2, 1)} = span{(−3, −5, 2)} = span{v}

and {v} is a basis for the eigenspace. The vector v is of course an eigenvector of A with eigenvalue λ = 4, and (of course) any multiple of v is also an eigenvector of A with λ = 4.

Example 21.5. It is known that λ = 3 is an eigenvalue of

A =
[ 11  −4  −8 ]
[  4   1  −4 ]
[  8  −4
−5 ]

Find the eigenspace of A corresponding to λ = 3.

Solution. First compute

A − 3I =
[ 8  −4  −8 ]
[ 4  −2  −4 ]
[ 8  −4  −8 ]

Now find the null space of A − 3I. Swapping R1 and R2:

[ 4  −2  −4 ]
[ 8  −4  −8 ]
[ 8  −4  −8 ]

Applying −2R1 + R2 and −2R1 + R3:

[ 4  −2  −4 ]
[ 0   0   0 ]
[ 0   0   0 ]

Hence, any vector in the null space of A − 3I can be written as

x = t1(1, 0, 1) + t2(1, 2, 0).

Therefore, the eigenspace of A corresponding to λ = 3 is

Null(A − 3I) = span{v1, v2} = span{(1, 0, 1), (1, 2, 0)}.

The vectors v1 and v2 are two linearly independent eigenvectors of A with eigenvalue λ = 3. Therefore {v1, v2} is a basis for the eigenspace of A with eigenvalue λ = 3. You can verify that Av1 = 3v1 and Av2 = 3v2.

As shown in the last example, there may exist more than one linearly independent eigenvector of A corresponding to the same eigenvalue; in other words, it is possible that the dimension of the eigenspace Null(A − λI) is greater than one. What can be said about the eigenvectors of A corresponding to different eigenvalues?

Theorem 21.6: Let v1, . . . , vk be eigenvectors of A corresponding to distinct eigenvalues λ1, . . . , λk of A. Then {v1, . . . , vk} is a linearly independent set.

Proof. Suppose by contradiction that {v1, . . . , vk} is linearly dependent and {λ1, .
. . , λk} are distinct. Then there is an index p such that vp+1 is a linear combination of v1, . . . , vp, where {v1, . . . , vp} is linearly independent:

vp+1 = c1v1 + c2v2 + · · · + cpvp.   (21.1)

Applying A to both sides we obtain

Avp+1 = c1Av1 + c2Av2 + · · · + cpAvp

and since Avi = λivi we can simplify this to

λp+1 vp+1 = c1λ1v1 + c2λ2v2 + · · · + cpλpvp.   (21.2)

On the other hand, multiply (21.1) by λp+1:

λp+1 vp+1 = c1λp+1 v1 + c2λp+1 v2 + · · · + cpλp+1 vp.   (21.3)

Now subtract equation (21.3) from equation (21.2):

0 = c1(λ1 − λp+1)v1 + c2(λ2 − λp+1)v2 + · · · + cp(λp − λp+1)vp.

Now {v1, . . . , vp} is linearly independent and thus ci(λi − λp+1) = 0 for each i. But the eigenvalues {λ1, . . . , λk} are all distinct, so λi − λp+1 ≠ 0, and we must have c1 = c2 = · · · = cp = 0. But from (21.1) this implies that vp+1 = 0, which is a contradiction because eigenvectors are by definition non-zero. This proves that {v1, v2, . . . , vk} is a linearly independent set.

Example 21.7. It is known that λ1 = 1 and λ2 = −1 are eigenvalues of

A =
[ −4   6  3 ]
[  1   7  9 ]
[  8  −6  1 ]

Find bases for the eigenspaces corresponding to λ1 and λ2 and show that any two vectors from these distinct eigenspaces are linearly independent.

Solution. Compute

A − λ1I =
[ −5   6  3 ]
[  1   6  9 ]
[  8  −6  0 ]

and one finds
that

Null(A − λ1I) = span{(−3, −4, 3)}.

Hence, v1 = (−3, −4, 3) is an eigenvector of A with eigenvalue λ1 = 1, and {v1} forms a basis for the corresponding eigenspace. Next, compute

A − λ2I = A + I =
[ −3   6  3 ]
[  1   8  9 ]
[  8  −6  2 ]

and one finds that

Null(A − λ2I) = span{(−1, −1, 1)}.

Hence, v2 = (−1, −1, 1) is an eigenvector of A with eigenvalue λ2 = −1, and {v2} forms a basis for the corresponding eigenspace. Now verify that v1 and v2 are linearly independent. Row reducing

[v1 v2] =
[ −3  −1 ]
[ −4  −1 ]
[  3   1 ]

with R1 + R3 gives

[ −3  −1 ]
[ −4  −1 ]
[  0   0 ]

The last matrix has rank r = 2, and thus v1, v2 are indeed linearly independent.

# 21.2 When λ = 0 is an eigenvalue

What can we say about A if λ = 0 is an eigenvalue of A? Suppose then that A has eigenvalue λ = 0. Then by definition, there exists a non-zero vector v such that Av = 0 · v = 0. In other words, v is in the null space of A. Thus, A is not invertible (Why?).

Theorem 21.8: The matrix A ∈ Rn×n is invertible if and only if λ = 0 is not an eigenvalue of A.

In fact, later we will see that det(A) is the product of its eigenvalues.

After this lecture you should know the following:
• what eigenvalues are
• what eigenvectors are and how to find them when eigenvalues are known
• the behavior of a discrete dynamical system when the initial
condition is set to an eigenvector of the system matrix

# Lecture 22 The Characteristic Polynomial

# 22.1 The Characteristic Polynomial of a Matrix

Recall that a number λ is an eigenvalue of A ∈ Rn×n if there exists a non-zero vector v such that Av = λv, or equivalently if v ∈ Null(A − λI). In other words, λ is an eigenvalue of A if and only if the subspace Null(A − λI) contains a vector other than the zero vector. We know that any matrix M has a non-trivial null space if and only if M is non-invertible if and only if det(M) = 0. Hence, λ is an eigenvalue of A if and only if λ satisfies

det(A − λI) = 0.

Let's compute the expression det(A − λI) for a generic 2 × 2 matrix:

det(A − λI) = | a11 − λ    a12    |
              |   a21    a22 − λ  |
            = (a11 − λ)(a22 − λ) − a12 a21
            = λ² − (a11 + a22)λ + a11 a22 − a12 a21.

Thus, if A is 2 × 2 then det(A − λI) = λ² − (a11 + a22)λ + a11 a22 − a12 a21 is a polynomial in the variable λ of degree n = 2. This motivates the following definition.

Definition 22.1: Let A be an n × n matrix. The polynomial

p(λ) = det(A − λI)

is called the characteristic polynomial of A.

In summary, to find the eigenvalues of A we must find the roots of the characteristic polynomial p(λ) = det(A − λI). The following theorem asserts that what we observed for the case n = 2 is indeed true for all n.
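The 2 × 2 formula above (the trace and determinant appearing as coefficients) invites a quick numerical check. The following NumPy sketch is an aside, using a sample matrix not taken from the notes:

```python
import numpy as np

# For 2x2: det(A - lam*I) = lam^2 - (a11 + a22)*lam + (a11*a22 - a12*a21)
A = np.array([[1.0, 2.0], [3.0, 4.0]])
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]  # lam^2 - tr(A) lam + det(A)

# The roots of the characteristic polynomial are exactly the eigenvalues of A
roots = np.sort(np.roots(coeffs))
eigs = np.sort(np.linalg.eigvals(A))
print(np.allclose(roots, eigs))  # True
```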
Theorem 22.2: The characteristic polynomial p(λ) = det(A − λI) of an n × n matrix A is an nth degree polynomial.

Proof. Recall that for the case n = 2 we computed that det(A − λI) = λ² − (a11 + a22)λ + a11a22 − a12a21. Therefore, the claim holds for n = 2. By induction, suppose that the claim holds for some n ≥ 2. If A is an (n + 1) × (n + 1) matrix then, expanding det(A − λI) along the first row,

det(A − λI) = (a11 − λ) det(A11 − λI) + Σ (k = 2 to n+1) (−1)^(1+k) a1k det(A1k − λI).

By induction, each det(A1k − λI) is an nth degree polynomial. Hence, (a11 − λ) det(A11 − λI) is an (n + 1)th degree polynomial. This ends the proof.

Example 22.3. Find the characteristic polynomial of

A = [ −2  4 ]
    [ −6  8 ]

What are the eigenvalues of A?

Solution. Compute

A − λI = [ −2  4 ]  −  [ λ  0 ]  =  [ −2 − λ    4    ]
         [ −6  8 ]     [ 0  λ ]     [  −6     8 − λ  ]

Therefore,

p(λ) = det(A − λI) = (−2 − λ)(8 − λ) + 24 = λ² − 6λ + 8 = (λ − 4)(λ − 2).

The roots of p(λ) are clearly λ1 = 4 and λ2 = 2. Therefore, the eigenvalues of A are λ1 = 4 and λ2 = 2.

Example 22.4. Find the eigenvalues of

A =
[ −4  −6  −7 ]
[  3   5   3 ]
[  0   0   3 ]

Solution. Compute

A − λI =
[ −4  −6  −7 ]
[  3   5   3 ]
[  0   0   3 ]
 −  λ 0 00 λ 00 0 λ  =  −4 − λ −6 −73 5 − λ 30 0 3 − λ  Then det( A − λI) = ( −4 − λ) ∣∣∣∣5 − λ 3 −λ 3 − λ ∣∣∣∣ − 3 ∣∣∣∣−6 −7 −λ 3 − λ ∣∣∣∣ = ( −4 − λ)[(3 − λ)(5 − λ) + 3 λ] − 3[ −6(3 − λ) − 7λ]= λ3 − 4λ2 + λ + 6 Factor the characteristic polynomial: p(λ) = λ3 − 4λ2 + λ + 6 = ( λ − 2)( λ − 3)( λ + 1) Therefore, the eigenvalues of A are λ1 = 2 , λ2 = 3 , λ3 = −1. Now that we know how to find eigenvalues, we can combine our work from the previous lecture to find both the eigenvalues and eigenvectors of a given matrix A. Example 22.5. For each eigenvalue of A from Example 22.4 , find a basis for the corre-sponding eigenspace. Solution. Start with λ1 = 2: A − 2I =  −6 −6 −73 3 30 0 1  After basic row reduction and back substitution, one finds that the null space of A − 2I is spanned by v1 =  −110  . > 171 The Characteristic Polynomial Therefore, v1 is an eigenvector of A with eigenvalue λ1. For λ2 = 3: A − 3I =  −7 −6 −73 2 30 0 0  The null space of A − 3I is spanned by v2 =  −101  and therefore v2 is an eigenvector of A with eigenvalue λ2. Finally, for λ3 = −1 we compute A − λ3I =  −3 −6 −73 6 30 0 4  and the null space of A − λ3I
is spanned by v3 = (−2, 1, 0), and therefore v3 is an eigenvector of A with eigenvalue λ3. Notice that in this case, the 3 × 3 matrix A has three distinct eigenvalues and the eigenvectors

{v1, v2, v3} = {(−1, 1, 0), (−1, 0, 1), (−2, 1, 0)}

correspond to the distinct eigenvalues λ1, λ2, λ3, respectively. Therefore, the set β = {v1, v2, v3} is linearly independent (by Theorem 21.6), and therefore β is a basis for R3. You can verify, for instance, that det([v1 v2 v3]) ≠ 0.

By Theorem 21.6, the previous example has the following generalization.

Theorem 22.6: Suppose that A is an n × n matrix and has n distinct eigenvalues λ1, λ2, . . . , λn. Let vi be an eigenvector of A corresponding to λi. Then {v1, v2, . . . , vn} is a basis for Rn.

Hence, if A has distinct eigenvalues, we are guaranteed the existence of a basis of Rn consisting of eigenvectors of A. In forthcoming lectures, we will see that it is very convenient to work with matrices A that have a set of eigenvectors that form a basis of Rn; this is one of the main motivations for studying eigenvalues and eigenvectors in the first place. However, we will see that not every matrix has a set of eigenvectors that form a basis of Rn. For example, what if A does not have n distinct eigenvalues? In this case, does there exist a basis for Rn of eigenvectors of A? In some cases, the answer is yes, as the next example demonstrates.

Example 22.7. Find the eigenvalues of A and a basis for each eigenspace.

A =
[  2  0  0 ]
[  4
2  2 ]
[ −2  0  1 ]

Does R3 have a basis of eigenvectors of A?

Solution. The characteristic polynomial of A is

p(λ) = det(A − λI) = (2 − λ)²(1 − λ) = −(λ − 1)(λ − 2)²

and therefore the eigenvalues are λ1 = 1 and λ2 = 2. Notice that although p(λ) is a polynomial of degree n = 3, it has only two distinct roots, and hence A has only two distinct eigenvalues. The eigenvalue λ2 = 2 is said to be repeated and λ1 = 1 is said to be a simple eigenvalue. For λ1 = 1 one finds that the eigenspace Null(A − λ1I) is spanned by v1 = (0, −2, 1), and thus v1 is an eigenvector of A with eigenvalue λ1 = 1. Now consider λ2 = 2:

A − 2I =
[  0  0   0 ]
[  4  0   2 ]
[ −2  0  −1 ]

Row reducing A − 2I one obtains

A − 2I ~
[ −2  0  −1 ]
[  0  0   0 ]
[  0  0   0 ]

Therefore, rank(A − 2I) = 1 and thus by the Rank Theorem it follows that Null(A − 2I) is a 2-dimensional eigenspace. Performing back substitution, one finds the following basis for the λ2-eigenspace:

{v2, v3} = {(−1, 0, 2), (0, 1, 0)}.

Therefore, the eigenvectors

{v1, v2, v3} = {(0, −2, 1), (−1, 0, 2), (0, 1, 0)}

form a basis for R3. Hence, for the repeated eigenvalue λ2 = 2 we were able to find two linearly independent eigenvectors.

Before moving further with more examples, we need to introduce some notation regarding the factorization of the characteristic
polynomial. In the previous Example 22.7, the characteristic polynomial factored (up to sign) as (λ − 1)(λ − 2)², and we found a basis for R3 of eigenvectors despite the presence of a repeated eigenvalue. In general, if p(λ) is an nth degree polynomial that can be completely factored into linear terms, then p(λ) can be written (up to a sign) in the form

p(λ) = (λ − λ1)^k1 (λ − λ2)^k2 · · · (λ − λp)^kp

where k1, k2, . . . , kp are positive integers and the roots of p(λ) are then λ1, λ2, . . . , λp. Because p(λ) is of degree n, we must have that

k1 + k2 + · · · + kp = n.

Motivated by this, we introduce the following definition.

Definition 22.8: Suppose that A ∈ Rn×n has characteristic polynomial p(λ) that can be factored as

p(λ) = (λ − λ1)^k1 (λ − λ2)^k2 · · · (λ − λp)^kp.

The exponent ki is called the algebraic multiplicity of the eigenvalue λi. The dimension of the eigenspace Null(A − λiI) associated to λi is called the geometric multiplicity of λi. For simplicity and whenever it is convenient, we will denote the geometric multiplicity of the eigenvalue λi by

gi = dim(Null(A − λiI)).

Example 22.9. A 6 × 6 matrix A has characteristic polynomial p(λ) = λ⁶ − 4λ⁵ − 12λ⁴. Find the eigenvalues of A and their algebraic multiplicities.

Solution. Factoring p(λ) we obtain

p(λ) = λ⁴(λ² − 4λ − 12) = λ⁴(λ − 6)(λ + 2).

Therefore, the eigenvalues of A are λ1 = 0, λ2 = 6, and λ3 = −2. Their algebraic multiplicities are k1 = 4, k2 = 1, and k3 =
1, respectively. The eigenvalue λ1 = 0 is repeated, while λ2 = 6 and λ3 = −2 are simple eigenvalues.

In Example 22.7, we had p(λ) = (λ − 1)(λ − 2)² up to sign, and thus λ1 = 1 has algebraic multiplicity k1 = 1 and λ2 = 2 has algebraic multiplicity k2 = 2. For λ1 = 1, we found one linearly independent eigenvector, and therefore λ1 has geometric multiplicity g1 = 1. For λ2 = 2, we found two linearly independent eigenvectors, and therefore λ2 has geometric multiplicity g2 = 2. However, as we will see in the next example, the geometric multiplicity gi is in general less than or equal to the algebraic multiplicity ki:

gi ≤ ki.

Example 22.10. Find the eigenvalues of A and a basis for each eigenspace:

A =
[  2   4   3 ]
[ −4  −6  −3 ]
[  3   3   1 ]

For each eigenvalue of A, find its algebraic and geometric multiplicity. Does R3 have a basis of eigenvectors of A?

Solution. One computes

p(λ) = −λ³ − 3λ² + 4 = −(λ − 1)(λ + 2)²

and therefore the eigenvalues of A are λ1 = 1 and λ2 = −2. The algebraic multiplicity of λ1 is k1 = 1 and that of λ2 is k2 = 2. For λ1 = 1 we compute

A − I =
[  1   4   3 ]
[ −4  −7  −3 ]
[  3   3   0 ]

and then one finds that v1 = (1, −1, 1) spans the λ1-eigenspace. Therefore, the geometric multiplicity of λ1 is g1 = 1. For λ2 = −2 we compute

A − λ2I =
[  4   4   3 ]
[ −4  −4  −3 ]
[  3   3   3 ]
~
[ 4  4  3 ]
[ 1  1  1 ]
[ 0  0  0 ]
~
[ 1  1  1 ]
[ 0  0  1 ]
[ 0  0  0 ]
Therefore, since rank(A − λ2I) = 2, the geometric multiplicity of λ2 = −2 is g2 = 1, which is less than the algebraic multiplicity k2 = 2. An eigenvector corresponding to λ2 = −2 is v2 = (−1, 1, 0). Therefore, for the repeated eigenvalue λ2 = −2, we are able to find only one linearly independent eigenvector. Therefore, it is not possible to construct a basis for R3 consisting of eigenvectors of A.

Hence, in the previous example, there does not exist a basis of R3 of eigenvectors of A because for one of the eigenvalues (namely λ2) the geometric multiplicity was less than the algebraic multiplicity: g2 < k2.

# 22.2 Eigenvalues and Similarity Transformations

To end this lecture, we will define a notion of similarity between matrices that plays an important role in linear algebra and that will be used in the next lecture when we discuss diagonalization of matrices. In mathematics, there are many cases where one is interested in classifying objects into categories or classes. Classifying mathematical objects into classes/categories is similar to how some physical objects are classified. For example, all fruits are classified into categories: apples, pears, bananas, oranges, avocados, etc. Given a piece of fruit A, how do you decide what category it is in? What are the properties that uniquely classify the piece of fruit A? In linear algebra, there are many objects of interest. We have spent a lot of time working with matrices and we have now reached a point in our study where
we would like to begin classifying matrices. How should we decide if matrices A and B are of the same type or, in other words, are similar? Below is how we will decide.

Definition 22.12: Let A and B be n × n matrices. We say that A is similar to B if there exists an invertible matrix P such that A = PBP⁻¹.

If A is similar to B then B is similar to A, because from the equation A = PBP⁻¹ we can multiply on the left by P⁻¹ and on the right by P to obtain P⁻¹AP = B. Hence, with Q = P⁻¹, we have that B = QAQ⁻¹ and thus B is similar to A. Hence, if A is similar to B then B is similar to A, and therefore we simply say that A and B are similar. Matrices that are similar are clearly not necessarily equal. However, there is a reason why the word similar is used. Here are a few reasons why.

Theorem 22.13: If A and B are similar matrices then the following are true:
(a) rank(A) = rank(B)
(b) det(A) = det(B)
(c) A and B have the same eigenvalues

Proof. We will prove part (c). If A and B are similar then A = PBP⁻¹ for some matrix P. Then

det(A − λI) = det(A − λPP⁻¹)
            = det(PBP⁻¹ − λPP⁻¹)
            = det(P(B − λI)P⁻¹)
            = det(P) det(B − λI) det(P⁻¹)
            = det(B − λI).

Thus, A and B have the same characteristic polynomial, and hence the same eigenvalues.

In the next lecture, we will see that if Rn has a basis of eigenvectors of A then A is similar to
a diagonal matrix.

After this lecture you should know the following:
• what the characteristic polynomial is and how to compute it
• how to compute the eigenvalues of a matrix
• that when a matrix A has distinct eigenvalues, we are guaranteed a basis of Rn consisting of eigenvectors of A
• that when a matrix A has repeated eigenvalues, it is still possible that there exists a basis of Rn consisting of eigenvectors of A
• what the algebraic multiplicity and geometric multiplicity of an eigenvalue are
• that eigenvalues of a matrix do not change under similarity transformations

# Lecture 23 Diagonalization

# 23.1 Eigenvalues of Triangular Matrices

Before discussing diagonalization, we first consider the eigenvalues of triangular matrices.

Theorem 23.1: Let A be a triangular matrix (either upper or lower). Then the eigenvalues of A are its diagonal entries.

Proof. We will prove the theorem for the case n = 3 with A upper triangular; the general case is similar. Suppose then that A is a 3 × 3 upper triangular matrix:

A =
[ a11  a12  a13 ]
[  0   a22  a23 ]
[  0    0   a33 ]

Then

A − λI =
[ a11 − λ    a12      a13   ]
[    0     a22 − λ    a23   ]
[    0        0     a33 − λ ]

and thus the characteristic polynomial of A is

p(λ) = det(A − λI) = (a11 − λ)(a22 − λ)(a33 − λ)

and the roots of p(λ) are λ1 = a11, λ2 = a22, λ3 = a33. In other words, the eigenvalues of A are simply the diagonal entries of A.

Example 23.2. Consider the following matrix:

A =
[  6   0  0  0  0 ]
[ −1   0  0  0  0 ]
[  0   0
{ "page_id": null, "source": 6828, "title": "from dpo" }
7 0 0 −1 0 0 −4 08 −2 3 0 7  .Diagonalization (a) Find the characteristic polynomial and the eigenvalues of A.(b) Find the geometric and algebraic multiplicity of each eigenvalue of A.We now introduce a very special type of a triangular matrix, namely, a diagonal matrix. Definition 23.3: A matrix D whose off-diagonal entries are all zero is called a diagonal matrix. For example, here is 3 × 3 diagonal matrix D =  3 0 00 −5 00 0 −8  . and here is a 5 × 5 diagonal matrix D =  6 0 0 0 00 0 0 0 00 0 −7 > 2 0 00 0 0 2 00 0 0 0 − 1 > 11  . A diagonal matrix is clearly also a triangular matrix and therefore the eigenvalues of a diagonal matrix D are simply the diagonal entries of D. Moreover, the powers of a diagonal matrix are easy to compute. For example, if D = [λ1 00 λ2 ] then D2 = [λ1 00 λ2 ] [ λ1 00 λ2 ] = [λ21 00 λ22 ] and similarly for any integer k = 1 , 2, 3, . . . , we have that Dk = [λk > 1 00 λk > 2 ] . # 23.2 Diagonalization Recall that two matrices A and B are said to be similar if there exists an invertible matrix P such that A = PBP −1. A very simple type of matrix is a diagonal matrix since many computations with diagonal matrices are trivial. The problem of diagonalization is thus concerned with answering the question of whether a given matrix is similar to a diagonal matrix. Below is the formal definition. > 180 Lecture 23 Definition 23.4: A matrix A is
called diagonalizable if it is similar to a diagonal matrix D, in other words, if there exists an invertible matrix P such that A = PDP^-1.

How do we determine when a given matrix A is diagonalizable? Let us first determine what conditions need to be met for a matrix A to be diagonalizable. Suppose then that A is diagonalizable. Then by Definition 23.4, there exists an invertible matrix P = [v1 v2 · · · vn] and a diagonal matrix

D = [ λ1  0   . . .  0  ]
    [ 0   λ2  . . .  0  ]
    [ .   .   . . .  .  ]
    [ 0   0   . . .  λn ]

such that A = PDP^-1. Multiplying both sides of the equation A = PDP^-1 on the right by P, we obtain AP = PD. Now

AP = [Av1 Av2 · · · Avn]

while on the other hand

PD = [λ1v1 λ2v2 · · · λnvn].

Therefore, since AP = PD,

[Av1 Av2 · · · Avn] = [λ1v1 λ2v2 · · · λnvn]

and comparing columns we must have Avi = λivi. Thus, the columns v1, v2, . . . , vn of P are eigenvectors of A, and they form a basis for R^n because P is invertible. In conclusion, if A is diagonalizable then R^n has a basis consisting of eigenvectors of A.

Suppose instead that {v1, v2, . . . , vn} is a basis of R^n consisting of eigenvectors of A. Let λ1, λ2, . . . , λn be the eigenvalues of A associated to v1, v2, . . . , vn, respectively, and set P = [v1 v2 · · · vn]. Then P is invertible because v1, v2, . . . , vn are linearly independent. Let

D = [ λ1  0   . . .  0  ]
    [ 0   λ2  . . .  0  ]
    [ .   .   . . .  .  ]
    [ 0   0   . . .  λn ]

Now, since Avi = λivi, we have

AP = A[v1 v2 · · · vn] = [Av1 Av2 · · · Avn] = [λ1v1 λ2v2 · · · λnvn].

On the other hand,

PD = [v1 v2 · · · vn] D = [λ1v1 λ2v2 · · · λnvn].

Therefore AP = PD, and since P is invertible, A = PDP^-1. Thus, if R^n has a basis consisting of eigenvectors of A then A is diagonalizable. We have therefore proved the following theorem.

Theorem 23.5: A matrix A is diagonalizable if and only if there is a basis {v1, v2, . . . , vn} of R^n consisting of eigenvectors of A.

The punchline of Theorem 23.5 is that the problem of diagonalizing a matrix A is equivalent to finding a basis of R^n consisting of eigenvectors of A. We will see in some of the examples below that it is not always possible to diagonalize a matrix.

# 23.3 Conditions for Diagonalization

We first consider the simplest case in which we can conclude that a given matrix is diagonalizable, namely, the case when all eigenvalues are distinct.

Theorem 23.6: Suppose that A ∈ R^(n×n) has n distinct eigenvalues λ1, λ2, . . . , λn. Then A is diagonalizable.

Proof. Each eigenvalue λi produces an eigenvector vi. The eigenvectors {v1, v2, . . . , vn} are linearly independent because they correspond to distinct eigenvalues (Theorem 21.6). Therefore, {v1, v2, . . . , vn} is a basis of R^n consisting of eigenvectors of A, and by Theorem 23.5 we conclude that A is diagonalizable.

What if A does not have distinct eigenvalues? Can A still be diagonalizable? The following theorem completely answers this question.

Theorem 23.7: A matrix A is diagonalizable if and only if the algebraic and geometric multiplicities of each eigenvalue are equal.

Proof. Let A be an n × n matrix and let λ1, λ2, . . . , λp denote the distinct eigenvalues of A. Let k1, k2, . . . , kp be the algebraic multiplicities and g1, g2, . . . , gp the geometric multiplicities of these eigenvalues, respectively. Suppose that the algebraic and geometric multiplicities of each eigenvalue are equal, that is, gi = ki for each i = 1, 2, . . . , p. Since k1 + k2 + · · · + kp = n, and gi = ki, we must also have g1 + g2 + · · · + gp = n. Therefore, there exist n linearly independent eigenvectors of A, and consequently A is diagonalizable. On the other hand, suppose that A is diagonalizable, so that there are n linearly independent eigenvectors and hence g1 + g2 + · · · + gp = n. Since the geometric multiplicity of each eigenvalue is at most its algebraic multiplicity, the only way that g1 + g2 + · · · + gp = n is if gi = ki, i.e., the geometric and algebraic multiplicities are equal.

Example 23.8. Determine if A is diagonalizable. If yes,
find a matrix P that diagonalizes A.

A = [ −4  −6  −7 ]
    [  3   5   3 ]
    [  0   0   3 ]

Solution. The characteristic polynomial of A is

p(λ) = det(A − λI) = −(λ − 2)(λ − 3)(λ + 1)

and therefore λ1 = 2, λ2 = 3, and λ3 = −1 are the eigenvalues of A. Since A has n = 3 distinct eigenvalues, by Theorem 23.6 A is diagonalizable. Eigenvectors v1, v2, v3 corresponding to λ1, λ2, λ3 are found to be

v1 = [ −1 ]   v2 = [ −1 ]   v3 = [ −2 ]
     [  1 ]        [  0 ]        [  1 ]
     [  0 ]        [  1 ]        [  0 ]

Therefore, a matrix that diagonalizes A is

P = [ −1  −1  −2 ]
    [  1   0   1 ]
    [  0   1   0 ]

You can verify that

P [ λ1  0   0  ] P^-1 = A
  [ 0   λ2  0  ]
  [ 0   0   λ3 ]

The following example demonstrates that it is possible for a matrix to be diagonalizable even though it does not have distinct eigenvalues.

Example 23.9. Determine if A is diagonalizable. If yes, find a matrix P that diagonalizes A.

A = [  2  0  0 ]
    [  4  2  2 ]
    [ −2  0  1 ]

Solution. The characteristic polynomial of A is

p(λ) = det(A − λI) = −(λ − 1)(λ − 2)^2

and therefore the eigenvalues are λ1 = 1 and λ2 = 2. An eigenvector corresponding to λ1 = 1 is

v1 = [  0 ]
     [ −2 ]
     [  1 ]

One finds that g2 = dim(Null(A − λ2 I)) = 2, and two linearly independent eigenvectors for λ2 are

v2 = [ −1 ]   v3 = [ 0 ]
     [  0 ]        [ 1 ]
     [  2 ]        [ 0 ]

Therefore, A is diagonalizable, and a matrix that diagonalizes A is

P = [v1 v2 v3] = [  0  −1  0 ]
                 [ −2   0  1 ]
                 [  1   2  0 ]

You can verify that

P [ λ1  0   0  ] P^-1 = A
  [ 0   λ2  0  ]
  [ 0   0   λ2 ]

Example 23.10. Determine if A is diagonalizable. If yes, find a matrix P that diagonalizes A.

A = [  2   4   3 ]
    [ −4  −6  −3 ]
    [  3   3   1 ]

Solution. The characteristic polynomial of A is

p(λ) = det(A − λI) = −λ^3 − 3λ^2 + 4 = −(λ − 1)(λ + 2)^2

and therefore the eigenvalues of A are λ1 = 1 and λ2 = −2. For λ2 = −2 one computes

A − λ2 I ~ [ 1  1  1 ]
           [ 0  0  1 ]
           [ 0  0  0 ]

We see that the dimension of the eigenspace of λ2 = −2 is g2 = 1, which is less than the algebraic multiplicity k2 = 2. Therefore, by Theorem 23.7, it is not possible to construct a basis of eigenvectors of A, and A is not diagonalizable.

Example 23.11. Suppose that A has eigenvector v with corresponding eigenvalue λ. Show that if A is invertible then v is an eigenvector of A^-1 with corresponding eigenvalue 1/λ.

Example 23.12. Suppose that A and B are n × n matrices such that AB = BA. Show that if v is an eigenvector of A with corresponding eigenvalue λ, then Bv is also an eigenvector of A with corresponding eigenvalue λ, provided Bv ≠ 0.

After this lecture you should know the following:
• how to determine if a matrix is diagonalizable or not
• how to find the algebraic and geometric multiplicities of an eigenvalue
• how to apply the theorems introduced in this lecture

# Lecture 24 Diagonalization of Symmetric Matrices

# 24.1 Symmetric Matrices

Recall that a square matrix A is said to be symmetric if A^T = A. As an
example, here is a 3 × 3 symmetric matrix:

A = [  1  −3  7 ]
    [ −3   2  8 ]
    [  7   8  4 ]

Symmetric matrices are ubiquitous in mathematics. For example, let f(x1, x2, . . . , xn) be a function having continuous second-order partial derivatives. Then Clairaut's Theorem from multivariable calculus says that

∂²f/∂xi∂xj = ∂²f/∂xj∂xi.

Therefore, the Hessian matrix of f is symmetric:

Hess(f) = [ ∂²f/∂x1∂x1  ∂²f/∂x1∂x2  · · ·  ∂²f/∂x1∂xn ]
          [ ∂²f/∂x2∂x1  ∂²f/∂x2∂x2  · · ·  ∂²f/∂x2∂xn ]
          [ .           .           . . .  .          ]
          [ ∂²f/∂xn∂x1  ∂²f/∂xn∂x2  · · ·  ∂²f/∂xn∂xn ]

The Second Derivative Test of multivariable calculus then says that if P = (a1, a2, . . . , an) is a critical point of f, that is,

∂f/∂x1(P) = ∂f/∂x2(P) = · · · = ∂f/∂xn(P) = 0,

then (i) P is a local minimum point of f if the matrix Hess(f) has all positive eigenvalues, (ii) P is a local maximum point of f if Hess(f) has all negative eigenvalues, and (iii) P is a saddle point of f if Hess(f) has both negative and positive eigenvalues.

In general, the eigenvalues of a matrix with real entries can be complex numbers. For example, the matrix

A = [ 0  −1 ]
    [ 1   0 ]

has characteristic polynomial p(λ) = λ^2 + 1
, the roots of which are clearly λ1 = i and λ2 = −i. Thus, in general, a matrix whose entries are all real numbers may have complex eigenvalues. However, for symmetric matrices we have the following.

Theorem 24.1: If A is a symmetric matrix then all of its eigenvalues are real numbers.

The proof is easy but we will omit it.

# 24.2 Eigenvectors of Symmetric Matrices

We proved earlier that if {v1, v2, . . . , vk} are eigenvectors of a matrix A corresponding to distinct eigenvalues λ1, λ2, . . . , λk then the set {v1, v2, . . . , vk} is linearly independent (Theorem 21.6). For symmetric matrices we can say even more, as the next theorem states.

Theorem 24.2: Let A be a symmetric matrix. If v1 and v2 are eigenvectors of A corresponding to distinct eigenvalues then v1 and v2 are orthogonal, that is, v1 • v2 = 0.

Proof. Recall that v1 • v2 = v1^T v2. Let λ1 ≠ λ2 be the eigenvalues associated to v1 and v2. Then

λ1 v1^T v2 = (λ1 v1)^T v2 = (Av1)^T v2 = v1^T A^T v2 = v1^T Av2 = v1^T (λ2 v2) = λ2 v1^T v2.

Therefore, λ1 v1^T v2 = λ2 v1^T v2, which implies that (λ1 − λ2) v1^T v2 = 0. But since λ1 − λ2 ≠ 0, we must have v1^T v2 = 0, that is, v1 and v2 are orthogonal.

# 24.3 Symmetric Matrices are Diagonalizable

As we have seen, the main criterion for diagonalization is that for each eigenvalue the geometric and algebraic multiplicities are equal; not all matrices
satisfy this condition, and thus not all matrices are diagonalizable. As it turns out, any symmetric matrix A is diagonalizable, and moreover (perhaps more importantly) there exists an orthogonal eigenvector matrix P that diagonalizes A. The full statement is below.

Theorem 24.3: If A is a symmetric matrix then A is diagonalizable. In fact, there is an orthonormal basis {v1, v2, . . . , vn} of R^n consisting of eigenvectors of A. In other words, the matrix P = [v1 v2 · · · vn] is orthogonal, P^T P = I, and A = PDP^T.

The proof of the theorem is not hard but we will omit it. The punchline of Theorem 24.3 is that, for a symmetric matrix, we will never encounter the situation where the geometric multiplicity is strictly less than the algebraic multiplicity. Moreover, we are guaranteed to find an orthogonal matrix that diagonalizes a given symmetric matrix.

Example 24.4. Find an orthogonal matrix P that diagonalizes the symmetric matrix

A = [  1  0  −1 ]
    [  0  1   1 ]
    [ −1  1   2 ]

Solution. The characteristic polynomial of A is

p(λ) = det(A − λI) = −λ^3 + 4λ^2 − 3λ = −λ(λ − 1)(λ − 3)

The eigenvalues of A are λ1 = 0, λ2 = 1, and λ3 = 3. Eigenvectors of A associated to λ1, λ2, λ3 are

u1 = [  1 ]   u2 = [ 1 ]   u3 = [ −1 ]
     [ −1 ]        [ 1 ]        [  1 ]
     [  1 ]        [ 0 ]        [  2 ]

As expected by Theorem 24.2, the eigenvectors u1, u2, u3 form an orthogonal set:

u1^T u2 = 0,  u1^T u3 = 0,  u2^T u3 = 0.

To find an orthogonal matrix P that diagonalizes A, we must normalize the eigenvectors u1, u2, u3 to obtain an orthonormal basis {v1, v2, v3}. To that end, first compute u1^T u1 = 3, u2^T u2 = 2, and u3^T u3 = 6. Then let v1 = (1/√3)u1, v2 = (1/√2)u2, and v3 = (1/√6)u3. Therefore, an orthogonal matrix that diagonalizes A is

P = [v1 v2 v3] = [  1/√3   1/√2  −1/√6 ]
                 [ −1/√3   1/√2   1/√6 ]
                 [  1/√3   0      2/√6 ]

You can easily verify that P^T P = I and that

A = P [ 0  0  0 ] P^T
      [ 0  1  0 ]
      [ 0  0  3 ]

Example 24.5. Let A and B be n × n matrices. Show that if A is symmetric then the matrix C = BAB^T is also symmetric.

After this lecture you should know the following:
• a symmetric matrix is diagonalizable with an orthonormal set of eigenvectors

# Lecture 25 The PageRank Algorithm

In this lecture, we will see how linear algebra is used in Google's webpage ranking algorithm, used in everyday Google searches.

# 25.1 Search Engine Retrieval Process

Search engines perform a two-stage process to retrieve search results [1]. In Stage 1, traditional text processing is used to find all relevant pages (e.g., keywords in title or body), producing a content score. After Stage 1, there is a large number of relevant pages. For example, the query "symmetric matrix" results in about 3,830,000 pages (03/31/15), and "homework help" results in 49,400,000 pages (03/31/15). How should the relevant pages be displayed? In Stage 2, the pages are sorted and displayed based on a pre-computed ranking that is query-independent; this is the popularity score.
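The two-stage process above can be sketched in a few lines of code. In this toy sketch (all page names and scores are hypothetical, not Google's actual data), Stage 1 selects relevant pages with a content score, and Stage 2 orders them by the pre-computed popularity score:

```python
# Stage 1: text processing assigns each relevant page a content score
# (hypothetical pages and values, for illustration only).
content_score = {"pageA": 0.9, "pageB": 0.9, "pageC": 0.4}

# Stage 2: a query-independent, pre-computed popularity score
# (e.g., a PageRank value) for the same pages.
popularity = {"pageA": 0.05, "pageB": 0.30, "pageC": 0.65}

# Display the relevant pages sorted by popularity,
# breaking ties with the content score.
results = sorted(content_score,
                 key=lambda p: (popularity[p], content_score[p]),
                 reverse=True)
print(results)  # most popular page first
```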
The ranking is based on the hyperlinked, or networked, structure of the web, and amounts to a popularity contest: if many pages link to page Pi, then Pi must be an important page and should therefore have a high popularity score. In January 1998, Jon Kleinberg from IBM (now a CS professor at Cornell) presented the HITS algorithm [2] (e.g., www.teoma.com). At Stanford, doctoral students Sergey Brin and Larry Page were busy working on a similar project, which they had begun in 1995. Below is the abstract of their paper [3]:

"In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at ."

[1] A.N. Langville and C.D. Meyer, Google's PageRank and Beyond, Princeton University Press, 2006
[2] J. Kleinberg, Authoritative sources in a hyperlinked environment, Journal of the ACM, 46, 1999; 9th ACM-SIAM Symposium on Discrete Algorithms
[3] S. Brin and L. Page, The anatomy of a large-scale hypertextual Web search engine, Computer Networks and ISDN Systems, 33:107-117, 1998

In both models, the web is defined as a directed graph, where the nodes represent webpages and the directed arcs represent hyperlinks; see Figure 25.1.

Figure 25.1: A tiny web represented as a directed graph.

# 25.2 A Description of the PageRank Algorithm

In the PageRank algorithm, each inlink is viewed as a recommendation (or vote). In general, pages with many inlinks are more important than pages with few inlinks. However, the quality of
the inlink (vote) is important, so the vote of each page is divided by the total number of recommendations made by that page. The PageRank of page i, denoted xi, is the sum of the weighted PageRanks of all the pages pointing to i:

xi = Σ_{j→i} xj / |Nj|

where (1) |Nj| is the number of outlinks from page j, and (2) j → i means page j links to page i.

Example 25.1. Find the PageRank of each page for the network in Figure 25.1.

From the previous example, we see that the PageRank of each page can be found by solving an eigenvalue/eigenvector problem. However, when dealing with large networks such as the internet, the size of the problem is in the billions (8.1 billion pages in 2006) and directly solving the equations is not possible. Instead, an iterative method called the power method is used. One starts with an initial guess, say x0 = (1/4, 1/4, 1/4, 1/4). Then one updates the guess by computing x1 = Hx0. In other words, we have a discrete dynamical system xk+1 = Hxk. A natural question is: under what conditions will the limiting value of the sequence,

lim_{k→∞} xk = lim_{k→∞} H^k x0 = q,

converge to an equilibrium of H? Also, if lim_{k→∞} xk exists, will it be a positive vector? And lastly, can x0 ≠ 0 be chosen arbitrarily?

To see what situations may occur, consider the network displayed in Figure 25.2. Starting with x0 = (1/5, . . . , 1/5), we find that for k ≥ 39 the vectors xk = H^k x0 cycle between (0, 0, 0, 0.28, 0.40) and (0, 0, 0, 0.40, 0.28). Therefore, the sequence x0, x1, x2, . . . does not converge. The reason for this is that nodes 4 and 5 form a cycle.

H = [ 0  1/3  0    0  0 ]
    [ 0  0    1/2  0  0 ]
    [ 0  1/3  0    0  0 ]
    [ 0  1/3  1/2  0  1 ]
    [ 0  0    0    1  0 ]

Figure 25.2: Cycles present in the network.

Now consider the network displayed in Figure 25.3. If we remove the cycle, we are still left with a dangling node, namely node 1 (e.g., a pdf or image file, with no outlinks). Starting with x0 = (1/5, . . . , 1/5) results in lim_{k→∞} xk = 0. Therefore, in this case the sequence x0, x1, x2, . . . converges to the zero vector, which for the purposes of ranking pages is an undesirable situation.

H = [ 0  1/3  0    0    0 ]
    [ 0  0    1/2  1/2  0 ]
    [ 0  1/3  0    0    0 ]
    [ 0  1/3  1/2  0    1 ]
    [ 0  0    0    1/2  0 ]

Figure 25.3: Dangling node present in the network.

To avoid the problems caused by dangling nodes and cycles, Brin and Page used the notion of a random surfer to adjust H. To deal with a dangling node, Brin and Page replaced the associated zero column with the vector (1/n)1 = (1/n, 1/n, . . . , 1/n). The justification for this adjustment is that if a random
surfer reaches a dangling node, the surfer will "teleport" to any page in the web with equal probability. The new updated hyperlink matrix H* may still not have the desired properties. To deal with cycles, a surfer may abandon the hyperlink structure of the web by occasionally moving to a random page, typing its address in the browser. With these adjustments, a random surfer now spends only a proportion of his time using the hyperlink structure of the web to visit pages. Hence, let 0 < α < 1 and define

G = αH* + (1 − α)(1/n)J,

where J is the all-ones matrix. The matrix G goes by the name of the Google matrix, and it is reported that Google uses α = 0.85. The Google matrix G is now a primitive and stochastic matrix. Stochastic means that all its columns are probability vectors, i.e., non-negative vectors whose components sum to 1. Primitive means that there exists k ≥ 1 such that G^k has all positive entries (k = 1 in our case). With these definitions, we now have the following theorem.

Theorem 25.2: If G is a primitive stochastic matrix then:
(i) There is a stochastic matrix G* such that lim_{k→∞} G^k = G*.
(ii) G* = [q q · · · q], where q is a probability vector.
(iii) For any probability vector q0 we have lim_{k→∞} G^k q0 = q.
(iv) The vector q is the unique probability vector which is an eigenvector of G with eigenvalue λ1 = 1.
(v) All other eigenvalues λ2, . . . , λn satisfy |λj| < 1.

Proof. We
will prove a special case [4]. Assume for simplicity that G is positive (this is the case for the Google matrix). Suppose x = Gx and x has mixed signs. Then, since the entries Gij are positive and the terms Gij xj have mixed signs,

|xi| = | Σ_{j=1}^{n} Gij xj | < Σ_{j=1}^{n} Gij |xj|.

Summing over i, and using that each column of G sums to 1,

Σ_{i=1}^{n} |xi| < Σ_{i=1}^{n} Σ_{j=1}^{n} Gij |xj| = Σ_{j=1}^{n} |xj|,

which is a contradiction. Therefore, all the eigenvectors in the λ1 = 1 eigenspace are either negative or positive. One then shows that the eigenspace corresponding to λ1 = 1 is 1-dimensional. This proves that there is a unique probability vector q such that q = Gq.

[4] K. Bryan, T. Leise, The $25,000,000,000 Eigenvector: The Linear Algebra Behind Google, SIAM Review, 48(3), 569-581

Let λ1, λ2, . . . , λn be the eigenvalues of G. We know that λ1 = 1 is a dominant eigenvalue: |λ1| > |λj| for j = 2, 3, . . . , n. Let q0 be a probability vector, let q be as above, and let v2, . . . , vn be the remaining eigenvectors of G. Then

q0 = q + c2 v2 + · · · + cn vn

and therefore

G^k q0 = G^k (q + c2 v2 + · · · + cn vn) = G^k q + c2 G^k v2 + · · · + cn G^k vn = q + c2 λ2^k v2 + · · · + cn λn^k vn.

From this we see that lim_{k→∞} G^k q0 = q.

# 25.3 Computation of the PageRank Vector

The Google matrix G is completely dense, which is computationally undesirable. Fortunately,

G = αH* + (1 − α)(1/n)11^T = α(H + (1/n)1a^T) + (1 − α)(1/n)11^T = αH + (1/n)1(αa + (1 − α)1)^T,

where a is the vector whose ith entry is 1 if page i is a dangling node and 0 otherwise, and H is very sparse and requires minimal storage. A vector-matrix multiplication generally requires O(n²) computation (n ≈ 8,000,000,000 in 2006). Estimates show that the average webpage has about 10 outlinks, so H has about 10n non-zero entries. This means that multiplication with H reduces to O(n) computation. Aside from being very simple, the power method is a matrix-free method, i.e., no manipulation of the matrix H is done. Brin and Page, and others, have confirmed that only 50-100 iterations are needed for a satisfactory approximation of the PageRank vector q for the web.

After this lecture you should know the following:
• how to set up a Google matrix and compute the PageRank vector

# Lecture 26 Discrete Dynamical Systems

# 26.1 Discrete Dynamical Systems

Many interesting problems in engineering, science, and mathematics can be studied within the framework of discrete dynamical systems. Dynamical systems are used to model systems that change over time. The state of the system (economic, ecologic, engineering, etc.) is measured at discrete time intervals, producing a sequence of vectors x0, x1, x2, . . . . The relationship between the vector xk and the next vector xk+1 is what constitutes a model.

Definition 26.1: A linear discrete dynamical system on R^n is an infinite sequence {x0, x1, x2, . . . } of vectors in R^n and a matrix A such that xk+1 = Axk. The vectors xk are called the states of the dynamical system and x0 is the initial condition of the system.

Once the initial condition
x0 is fixed, the remaining state vectors x1, x2, . . . can be found by iterating the equation xk+1 = Axk.

# 26.2 Population Model

Consider the dynamical system consisting of the population movement between a city and its suburbs. Let x ∈ R^2 be the state population vector whose first component is the population of the city and whose second component is the population of the suburbs:

x = [ c ]
    [ s ]

For simplicity, we assume that c + s = 1, i.e., c and s are population percentages of the total population. Suppose that in the year 1900 the city population was c0 and the suburban population was s0. Suppose it is known that each year 5% of the city's population moves to the suburbs and 3% of the suburban population moves to the city. Hence, the population in the city in year 1901 is c1 = 0.95c0 + 0.03s0, while the population in the suburbs in year 1901 is s1 = 0.05c0 + 0.97s0. The equations

c1 = 0.95c0 + 0.03s0
s1 = 0.05c0 + 0.97s0

can be written in matrix form as

[ c1 ] = [ 0.95  0.03 ] [ c0 ]
[ s1 ]   [ 0.05  0.97 ] [ s0 ]

Performing the same analysis for the next year, the population in 1902 is

[ c2 ] = [ 0.95  0.03 ] [ c1 ]
[ s2 ]   [ 0.05  0.97 ] [ s1 ]

Hence, the population movement is a linear dynamical system with matrix and state vector

A = [ 0.95  0.03 ]     xk = [ ck ]
    [ 0.05  0.97 ]          [ sk ]

Suppose that the initial population state vector is

x0 = [ 0.70 ]
     [ 0.30 ]

Then

x1 = Ax0 = [ 0.95  0.03 ] [ 0.70 ] = [ 0.674 ]
           [ 0.05  0.97 ] [ 0.30 ]   [ 0.326 ]

and

x2 = Ax1 = [ 0.95  0.03 ] [ 0.674 ] = [ 0.650 ]
           [ 0.05  0.97 ] [ 0.326 ]   [ 0.350 ]

In a similar fashion, one can compute that, to 3 decimal places,

x500 = [ 0.375 ]     x1000 = [ 0.375 ]
       [ 0.625 ]             [ 0.625 ]

It seems as though the population distribution converges to a steady state or equilibrium. We predict that in the year 2400, about 37.5% of the total population will live in the city and 62.5% in the suburbs. Our computations in the population model indicate that the population distribution is reaching a sort of steady state or equilibrium, which we now define.

Definition 26.2: Let xk+1 = Axk be a discrete dynamical system. An equilibrium state for A is a vector q such that Aq = q.

Hence, if q is an equilibrium for A and the initial condition is x0 = q, then x1 = Ax0 = x0, x2 = Ax1 = x0, and iteratively xk = x0 = q for all k. Thus, if the system starts at the equilibrium q then it remains at q for all time. How do we find equilibrium states? If q is an equilibrium for A, then from Aq = q we have Aq − q = 0 and therefore (A − I)q = 0. Therefore, q is an equilibrium for A if and only if q is in the nullspace of the matrix A − I:

q ∈ Null(A − I).

Example 26.3. Find the equilibrium states of the matrix from the population model

A = [ 0.95  0.03 ]
    [ 0.05  0.97 ]

Does the initial condition of the population x0 change the
long term behavior of the discrete dynamical system? We will know the answer once we perform an eigenvalue analysis on A (Lecture 22). As a preview, we will use the fact that xk = A^k x0 and then write x0 in an appropriate basis that reveals how A acts on x0. To see how the equation xk = A^k x0 is obtained, notice that x1 = Ax0, therefore x2 = Ax1 = A(Ax0) = A²x0, therefore x3 = Ax2 = A(A²x0) = A³x0, and so on.

# 26.3 Stability of Discrete Dynamical Systems

We first formally define the notion of stability of a discrete dynamical system.

Definition 26.4: Consider the discrete dynamical system xk+1 = Axk, where A ∈ R^(n×n). The origin 0 ∈ R^n is said to be asymptotically stable if for any initial condition x0 ∈ R^n of the dynamical system we have

lim_{k→∞} xk = lim_{k→∞} A^k x0 = 0.

The following theorem characterizes when a discrete linear dynamical system is asymptotically stable.

Theorem 26.5: Let λ1, . . . , λn be the eigenvalues of A. If |λj| < 1 for all j = 1, 2, . . . , n, then the origin 0 is asymptotically stable for xk+1 = Axk.

Proof. For simplicity, we suppose that A is diagonalizable. Let {v1, . . . , vn} be a basis of eigenvectors of A with eigenvalues λ1, . . . , λn, respectively. Then, for any vector x0 ∈ R^n, there exist constants c1, . . . , cn such that

x0 = c1 v1 + · · · + cn vn.

Now, for any integer k ≥ 1 we have A^k vi = λi^k vi. Then

xk = A^k x0 = A^k (c1 v1 + · · · + cn vn) = c1 A^k v1 + · · · + cn A^k vn = c1 λ1^k v1 + · · · + cn λn^k vn.

Since |λi| < 1, we have lim_{k→∞} λi^k = 0. Therefore,

lim_{k→∞} xk = lim_{k→∞} (c1 λ1^k v1 + · · · + cn λn^k vn) = c1 (lim_{k→∞} λ1^k) v1 + · · · + cn (lim_{k→∞} λn^k) vn = 0·v1 + · · · + 0·vn = 0.

This completes the proof.

As an example of an asymptotically stable dynamical system, consider the 2D system

xk+1 = [ 1.1   −0.4 ] xk
       [ 0.15   0.6 ]

The eigenvalues of

A = [ 1.1   −0.4 ]
    [ 0.15   0.6 ]

are λ1 = 0.8 and λ2 = 0.9. Hence, by Theorem 26.5, for any initial condition x0 the sequence {x0, x1, x2, . . . } converges to the origin in R^2. In Figure 26.1, we plot four different state sequences {x0, x1, x2, . . . } corresponding to the four distinct initial conditions

x0 = [ 3 ],  [  3 ],  [ −3 ],  [ −3 ]
     [ 7 ]   [ −7 ]   [  7 ]   [ −7 ]

As expected, all trajectories converge to the origin.

Figure 26.1: A 2D asymptotically stable linear system.

After this lecture you should know the following:
• what a dynamical system is
• how to find its equilibrium states
• how to determine if a discrete dynamical system has the origin as an asymptotically stable equilibrium
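The long-term behavior described in this lecture is easy to reproduce numerically. A minimal sketch, assuming NumPy is available, iterates the population model of Section 26.2 and confirms that it converges to the equilibrium (0.375, 0.625) found in Null(A − I):

```python
import numpy as np

# Population-movement matrix from Section 26.2
A = np.array([[0.95, 0.03],
              [0.05, 0.97]])
x = np.array([0.70, 0.30])  # initial population distribution in 1900

# Iterate x_{k+1} = A x_k many times
for _ in range(500):
    x = A @ x
print(np.round(x, 3))  # approaches the equilibrium (0.375, 0.625)

# The equilibrium is the eigenvector of A for the eigenvalue 1,
# normalized so its entries sum to 1 (a probability vector).
eigvals, eigvecs = np.linalg.eig(A)
q = eigvecs[:, np.argmax(eigvals)]
q = q / q.sum()
print(np.round(q, 3))  # (0.375, 0.625)

# The other eigenvalue has absolute value less than 1, which is why the
# iteration converges (compare Theorem 26.5).
```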
# CSE 311 Quiz Section: December 6, 2012 (Solutions)

# 1 Determining Countability

Determine whether each of these sets is finite, countably infinite, or uncountable. For those that are countably infinite, exhibit a one-to-one correspondence between the set of positive integers and that set.

a) the integers that are multiples of 7
Answer: Countably infinite; listing: 0, 7, -7, 14, -14, 21, -21, 28, -28, ...

b) the integers less than 100
Answer: Countably infinite; listing: 99, 98, 97, ..., 0, -1, -2, -3, ...

c) the real numbers between 0 and 1/2
Answer: Uncountable (similar to the diagonalization proof for the real numbers between 0 and 1)

d) the real numbers not containing 0 in their decimal representations
Answer: Uncountable

e) all bit strings not containing the bit 0
Answer: Countably infinite; listing: λ, 1, 11, 111, 1111, ...

f) all positive rational numbers that cannot be written with denominators less than 4
Answer: Countably infinite; use a 'dovetailing' argument similar to the one for the positive rationals, except omit fractions with denominators less than four and those that reduce to fractions with a denominator less than four (as well as repeats)

# 2 Sets and Countability

a) Show that if A and B are sets, A is uncountable, and A ⊆ B, then B is uncountable.
Answer: Assume B is countable. Then the elements of B can be listed b1, b2, b3, ... Because A is a subset of B, taking the subsequence of {bn} that contains the terms that are in A gives a listing of the elements of A. But we assumed A is uncountable, so we have reached a contradiction. Hence B is uncountable.

b) If A is an uncountable set and B is a countable set, must A − B be uncountable?
Answer: Yes. Assume A − B is countable. Then, since A = (A − B) ∪ (A ∩ B), and A ∩ B is countable (it is a subset of B), the elements of A can be listed in a sequence by interleaving a listing of A − B with a listing of A ∩ B. But finding such a listing means that A is countable, contradicting the assumption that A is uncountable. Therefore A − B must be uncountable.
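The 'dovetailing' idea from part f) of the first problem can be made concrete. The sketch below (plain Python, purely illustrative) walks the grid of fractions p/q diagonal by diagonal, skipping any fraction whose lowest-terms denominator is less than 4 and skipping repeats, producing exactly the kind of listing that witnesses countability:

```python
from fractions import Fraction

def rationals_denominator_at_least_4():
    """Enumerate the positive rationals whose lowest-terms denominator is >= 4."""
    seen = set()
    s = 2  # walk the diagonals p + q = s of the p/q grid
    while True:
        for p in range(1, s):
            q = s - p
            f = Fraction(p, q)  # automatically reduced to lowest terms
            if f.denominator >= 4 and f not in seen:
                seen.add(f)
                yield f
        s += 1

gen = rationals_denominator_at_least_4()
first = [next(gen) for _ in range(5)]
print(first)  # 1/4, 1/5, 1/6, 2/5, 3/4
```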
continuing, you agree to ourUser Agreement * [Amazing]( * [Animals & Pets]( * [Cringe & Facepalm]( * [Funny]( * [Interesting]( * [Memes]( * [Oddly Satisfying]( * [Reddit Meta]( * [Wholesome & Heartwarming]( * Games * [Action Games]( * [Adventure Games]( * [Esports]( * [Gaming Consoles & Gear]( * [Gaming News & Discussion]( * [Mobile Games]( * [Other Games]( * [Role-Playing Games]( * [Simulation Games]( * [Sports & Racing Games]( * [Strategy Games]( * [Tabletop Games]( * Q&As * [Q&As]( * [Stories & Confessions]( * Technology * [3D Printing]( * [Artificial Intelligence & Machine Learning]( * [Computers & Hardware]( * [Consumer Electronics]( * [DIY Electronics]( * [Programming]( * [Software & Apps]( * [Streaming Services]( * [Tech News & Discussion]( * [Virtual & Augmented Reality]( * Pop Culture * [Celebrities]( * [Creators & Influencers]( * [Generations & Nostalgia]( * [Podcasts]( * [Streamers]( * [Tarot & Astrology]( * Movies & TV * [Action Movies & Series]( * [Animated Movies & Series]( * [Comedy Movies & Series]( * [Crime, Mystery, & Thriller Movies & Series]( * [Documentary Movies & Series]( * [Drama Movies & Series]( * [Fantasy Movies & Series]( * [Horror Movies & Series]( * [Movie News & Discussion]( * [Reality TV]( * [Romance Movies & Series]( * [Sci-Fi Movies & Series]( * [Superhero Movies & Series]( * [TV News & Discussion]( * * * *
Title: Are there more numbers between 1 and 5 than between 1 and 2? If yes, how? Aren't both infinity? : r/askscience

r/askscience: Ask a science question, get a science answer. (26M members)

Posted 10 yr. ago by sunbir, flaired [Mathematics]

Are there more numbers between 1 and 5 than between 1 and 2? If yes, how? Aren't both infinity?
===============================================================================================

Edit: wow! This blew up! I'm a fairly new Reddit user. Reddit is so amazing! I'll try to read as many answers as I can!

Archived post. New comments cannot be posted and votes cannot be cast.
freemath: There are indeed an infinite amount of numbers between 1 and 5, but not every infinite set has the same amount of elements: there are, for example, more real numbers between 1 and 2 than there are integers, even though there are an infinite amount of both. The proof of this is, in my opinion, very elegant and fairly easy to comprehend; it is called Cantor's diagonalisation argument.

(1,2) and (1,5) are both uncountably infinite and therefore DO share the same cardinality (and therefore, in some ways, the same 'amount' of numbers), while clearly they do not share the same 'length'. The interesting thing about measure is that while you can assign length, volume, etc. (it's far more general than this, but
for the sake of the post...) to 'usual' sets (intervals, boxes and so on), you can also calculate the measure of VERY bizarre sets, like the Cantor set (a kind of infinitely fine patchwork of points between 0 and 1). So to summarize, while (1,2) and (1,5) do share the same cardinality, as has been explained, they do not share the same measure (namely Jordan or Lebesgue), so it's the concept of measure that captures the behavior you have noticed.

For any possible number X between 1 and 2, you can take ((X-1)*4)+1 and get exactly one corresponding number between 1 and 5; and for any possible number Y between 1 and 5, you can take ((Y-1)/4)+1 and get exactly one corresponding number between 1 and 2.

jedi-son • 10y ago: The cardinality is the same. To be specific, they are both uncountable sets. That being said, there exists a one-to-one mapping between the two. Uncountable means that there does not exist a way to map all the numbers between 1 and 2 to the integers, i.e. you can't "count" them even with an infinitely long list. So in short, they are the same size.

jmt222 • 10y ago: It depends on how you define "more". Infinity does complicate things. The most common way of comparing infinite sets is to say that two sets have the
same size if there exists a bijection between them. In this case, f(x) = 4x - 3 is a bijection (one-to-one and onto) between [1,2] and [1,5], i.e. any number a in [1,2] corresponds with a number b in [1,5] and vice versa through f, i.e. f(a) = b, and you cannot replace just a or b in the equation with any other number.

Another way of comparing the two sets [1,2] and [1,5] is to see that [1,2] is a proper subset of [1,5], i.e. everything in [1,2] can be found in [1,5] but not everything in [1,5] can be found in [1,2]. In this sense, [1,5] is bigger than [1,2], but this way of relating sets has problems. What if we replaced [1,2] with [0,2]? Now we can't even talk about comparisons, since neither set is a subset of the other.

Another way of comparing sets of real numbers is something called Lebesgue measure, which is one way of quantifying how much stuff is in a set. It is difficult to define Lebesgue measure in layman's terms, but an imprecise definition is to say that the Lebesgue measure of a set is the sum of the lengths of the intervals that "cover" the set the best. The best way to cover [1,2] with an interval is to just cover it with [1,2] (again, there are more technical details to consider, but this is "close"). The length of [1,2] is 1. The length of [1,5] is 4, so in terms of Lebesgue measure, [1,5] has more in it than [1,2]. However, we do run into problems with Lebesgue measure since, for example, (0,infinity) has infinite Lebesgue measure and (-infinity,infinity) does as well, even though one looks like it should be bigger than the other.

To summarize, comparing set sizes is tricky when you have infinite sets.
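The bijection jmt222 describes (and the earlier commenter's ((X-1)*4)+1, which is the same map written differently) is easy to check numerically. A minimal sketch, with function names of my own choosing:

```python
def f(x):
    """Bijection from [1, 2] onto [1, 5]: f(x) = 4x - 3, algebraically
    identical to ((x - 1) * 4) + 1 from the earlier comment."""
    return 4 * x - 3

def f_inv(y):
    """Inverse map, sending [1, 5] back onto [1, 2]: ((y - 1) / 4) + 1."""
    return (y - 1) / 4 + 1

assert f(1) == 1 and f(2) == 5              # endpoints land on endpoints
for x in [1.0, 1.25, 1.5, 1.75, 2.0]:
    assert f_inv(f(x)) == x                 # round trip recovers x exactly
print(f(1.5))  # the midpoint of [1, 2] maps to the midpoint of [1, 5]
```

Because every x in [1,2] pairs with exactly one y in [1,5] and vice versa, the two intervals have the same cardinality even though one is four times as long.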
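The diagonalisation argument invoked in the replies above can also be rendered concretely. Assuming each real in (0, 1) is handed to us as a string of decimal digits (this representation and the helper name are my own illustration, not from the thread):

```python
def diagonal_escape(listing):
    """Build a decimal expansion whose n-th digit differs from the n-th
    digit of the n-th entry, so it cannot equal any entry of the listing."""
    digits = []
    for n, entry in enumerate(listing):
        d = entry[n]
        # Pick a digit different from d; avoiding 0 and 9 sidesteps the
        # 0.4999... = 0.5 dual-representation issue.
        digits.append('5' if d != '5' else '6')
    return '0.' + ''.join(digits)

# A finite prefix of a purported listing of reals in (0, 1),
# each given by its digits after the decimal point:
listing = ['3141', '2718', '1618', '4142']
x = diagonal_escape(listing)
print(x)  # differs from entry n in digit n, so it appears nowhere in the listing
```

Since the construction works against any proposed listing, no listing of the reals in (0, 1) can be complete, which is exactly Cantor's conclusion that this set is uncountable.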