9 Matrix Algebra
  9.1 Sums of Matrices
  9.2 Matrix Multiplication
  9.3 Matrix Transpose
10 Invertible Matrices
  10.1 Inverse of a Matrix
  10.2 Computing the Inverse of a Matrix
  10.3 Invertible Linear Mappings
11 Determinants
  11.1 Determinants of 2 × 2 and 3 × 3 Matrices
  11.2 Determinants of n × n Matrices
  11.3 Triangular Matrices
12 Properties of the Determinant
  12.1 ERO and Determinants
  12.2 Determinants and Invertibility of Matrices
  12.3 Properties of the Determinant
13 Applications of the Determinant
  13.1 The Cofactor Method
  13.2 Cramer's Rule
  13.3 Volumes
14 Vector Spaces
  14.1 Vector Spaces
  14.2 Subspaces of Vector Spaces
15 Linear Maps
  15.1 Linear Maps on Vector Spaces
  15.2 Null Space and Column Space
16 Linear Independence, Bases, and Dimension
  16.1 Linear Independence
  16.2 Bases
  16.3 Dimension of a Vector Space
17 The Rank Theorem
  17.1 The Rank of a Matrix
18 Coordinate Systems
  18.1 Coordinates
  18.2 Coordinate Mappings
  18.3 Matrix Representation of a Linear Map
19 Change of Basis
  19.1 Review of Coordinate Mappings on Rn
  19.2 Change of Basis
20 Inner Products and Orthogonality
  20.1 Inner Product on Rn
  20.2 Orthogonality
  20.3 Coordinates in an Orthonormal Basis
21 Eigenvalues and Eigenvectors
  21.1 Eigenvectors and Eigenvalues
  21.2 When λ = 0 Is an Eigenvalue
22 The Characteristic Polynomial
  22.1 The Characteristic Polynomial of a Matrix
  22.2 Eigenvalues and Similarity Transformations
23 Diagonalization
  23.1 Eigenvalues of Triangular Matrices
  23.2 Diagonalization
  23.3 Conditions for Diagonalization
24 Diagonalization of Symmetric Matrices
  24.1 Symmetric Matrices
  24.2 Eigenvectors of Symmetric Matrices
  24.3 Symmetric Matrices Are Diagonalizable
25 The PageRank Algorithm
  25.1 Search Engine Retrieval Process
  25.2 A Description of the PageRank Algorithm
  25.3 Computation of the PageRank Vector
26 Discrete Dynamical Systems
  26.1 Discrete Dynamical Systems
  26.2 Population Model
  26.3 Stability of Discrete Dynamical Systems

# Lecture 1: Systems of Linear Equations

In this lecture, we will introduce linear systems and the method of row reduction to solve them. We will introduce matrices as a convenient structure to represent and solve linear systems. Lastly, we will discuss geometric interpretations of the solution set of a linear system in 2 and 3 dimensions.

# 1.1 What is a system of linear equations?

Definition 1.1: A system of m linear equations in n unknown variables x1, x2, . . . , xn is a collection of m equations of the form

    a11x1 + a12x2 + a13x3 + · · · + a1nxn = b1
    a21x1 + a22x2 + a23x3 + · · · + a2nxn = b2
    a31x1 + a32x2 + a33x3 + · · · + a3nxn = b3
      ...
    am1x1 + am2x2 + am3x3 + · · · + amnxn = bm        (1.1)

The numbers aij are called the coefficients of the linear system; because there are m equations and n unknown variables, there are therefore m × n coefficients. The main problem associated with a linear system is, of course, to solve it:

Problem: Find a list of n numbers (s1, s2, . . . , sn) that satisfies the system of linear equations (1.1). In other words, if we substitute the list of numbers (s1, s2, . . . , sn) for the unknown variables (x1, x2, . . . , xn) in equation (1.1), then the left-hand side of the ith equation will equal bi.

We call such a list (s1, s2, . . . , sn) a solution to the system of equations. Notice that we say "a solution" because there may be more than one. The set of all solutions to a linear system is called its solution set. As an example of a linear system, below is a linear system consisting of m = 2 equations and n = 3 unknowns:

    x1 − 5x2 − 7x3 = 0
         5x2 + 11x3 = 1

Here is a linear system consisting of m = 3 equations and n = 2 unknowns:

    −5x1 + x2 = −1
    πx1 − 5x2 = 0
    63x1 − √2 x2 = −7

And finally, below is a linear system consisting of m = 4 equations
and n = 6 unknowns:

    −5x1 + x3 − 44x4 − 55x6 = −1
    πx1 − 5x2 − x3 + 4x4 − 5x5 + √5 x6 = 0
    63x1 − √2 x2 − (1/5)x3 + ln(3)x4 + 4x5 − (1/33)x6 = 0
    63x1 − √2 x2 − (1/5)x3 − (1/8)x4 − 5x6 = 5

Example 1.2. Verify that (1, 2, −4) is a solution to the system of equations

    2x1 + 2x2 + x3 = 2
     x1 + 3x2 − x3 = 11.

Is (1, −1, 2) a solution to the system?

Solution. The number of equations is m = 2 and the number of unknowns is n = 3. There are m × n = 6 coefficients: a11 = 2, a12 = 2, a13 = 1, a21 = 1, a22 = 3, and a23 = −1. And b1 = 2 and b2 = 11. The list of numbers (1, 2, −4) is a solution because

    2 · (1) + 2 · (2) + (−4) = 2
    (1) + 3 · (2) − (−4) = 11.

On the other hand, for (1, −1, 2) we have that 2(1) + 2(−1) + (2) = 2 but

    1 + 3(−1) − 2 = −4 ≠ 11.

Thus, (1, −1, 2) is not a solution to the system.

A linear system may not have a solution at all. If this is the case, we say that the linear system is inconsistent:

    INCONSISTENT ⇔ NO SOLUTION

A linear system is called consistent if it has at least one solution:

    CONSISTENT ⇔ AT LEAST ONE SOLUTION

We will see shortly that a consistent linear
system will have either just one solution or infinitely many solutions. For example, a linear system cannot have just 4 or 5 solutions: if it has multiple solutions, then it will have infinitely many solutions.

Example 1.3. Show that the linear system does not have a solution:

    −x1 + x2 = 3
     x1 − x2 = 1.

Solution. If we add the two equations we get 0 = 4, which is a contradiction. Therefore, there does not exist a list (s1, s2) that satisfies the system, because this would lead to the contradiction 0 = 4.

Example 1.4. Let t be an arbitrary real number and let

    s1 = −3/2 − 2t
    s2 = 3/2 + t
    s3 = t.

Show that for any choice of the parameter t, the list (s1, s2, s3) is a solution to the linear system

    x1 + x2 + x3 = 0
    x1 + 3x2 − x3 = 3.

Solution. Substitute the list (s1, s2, s3) into the left-hand side of the first equation:

    (−3/2 − 2t) + (3/2 + t) + t = 0

and in the second equation:

    (−3/2 − 2t) + 3(3/2 + t) − t = −3/2 + 9/2 = 3.

Both equations are satisfied for any value of t. Because we can vary t arbitrarily, we get an infinite number of solutions parameterized by t. For example, compute the list (s1, s2, s3) for t = 3 and confirm that the resulting list is a solution to the linear system.

# 1.2 Matrices

We will use matrices to develop systematic methods to solve linear systems
and to study the properties of the solution set of a linear system. Informally speaking, a matrix is an array or table consisting of rows and columns. For example,

    A = [  1  −2   1   0
           0   2  −8   8
          −4   7  11  −5 ]

is a matrix having m = 3 rows and n = 4 columns. In general, a matrix with m rows and n columns is an m × n matrix, and the set of all such matrices will be denoted by Mm×n. Hence, A above is a 3 × 4 matrix. The entry of A in the ith row and jth column will be denoted by aij. A matrix containing only one column is called a column vector and a matrix containing only one row is called a row vector. For example, here is a row vector

    u = [1  −3  4]

and here is a column vector

    v = [  3
          −1 ].

We can associate to a linear system three matrices: (1) the coefficient matrix, (2) the output column vector, and (3) the augmented matrix. For example, for the linear system

    5x1 − 3x2 + 8x3 = −1
     x1 + 4x2 − 6x3 = 0
          2x2 + 4x3 = 3

the coefficient matrix A, the output vector b, and the augmented matrix [A b] are:

    A = [ 5  −3   8       b = [ −1       [A b] = [ 5  −3   8  −1
          1   4  −6             0                 1   4  −6   0
          0   2   4 ],          3 ],              0   2   4   3 ].

If a linear system has m equations and n unknowns, then the coefficient matrix A must be an m × n matrix, that is, A has m rows and n columns. Using our previously defined notation, we
can write this as A ∈ Mm×n. If we are given an augmented matrix, we can write down the associated linear system in an obvious way. For example, the linear system associated to the augmented matrix

    [ 1  4  −2   8  12
      0  1  −7   2  −4
      0  0   5  −1   7 ]

is

    x1 + 4x2 − 2x3 + 8x4 = 12
         x2 − 7x3 + 2x4 = −4
              5x3 − x4 = 7.

We can study matrices without interpreting them as coefficient matrices or augmented matrices associated to a linear system. Matrix algebra is a fascinating subject with numerous applications in every branch of engineering, medicine, statistics, mathematics, finance, biology, chemistry, etc.

# 1.3 Solving linear systems

In algebra, you learned to solve equations by first "simplifying" them using operations that do not alter the solution set. For example, to solve 2x = 8 − 2x we can add 2x to both sides and obtain 4x = 8, and then multiply both sides by 1/4, yielding x = 2. We can perform similar operations on a linear system. There are three basic operations, called elementary operations, that can be performed:

1. Interchange two equations.
2. Multiply an equation by a nonzero constant.
3. Add a multiple of one equation to another.

These operations do not alter the solution set. The idea is to apply these operations iteratively to simplify the linear system to a point where one can easily write down the solution set. It is convenient to apply elementary operations on the augmented matrix [A b] representing the linear system. In this case, we call the operations elementary row operations, and the process of simplifying the linear system using these operations is called row reduction.
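The notes contain no code, but the claim that elementary row operations do not alter the solution set is easy to check numerically. The sketch below is an addition to the notes (it assumes NumPy is available): it applies one operation of each type to the augmented matrix of a small system with a unique solution and confirms that the solution is unchanged.

```python
import numpy as np

# Augmented matrix [A | b] of a small system whose unique solution is (-2, 1, 1):
#   -3x1 + 2x2 + 4x3 = 12
#     x1       - 2x3 = -4
#    2x1 - 3x2 + 4x3 = -3
M = np.array([[-3.0,  2.0,  4.0, 12.0],
              [ 1.0,  0.0, -2.0, -4.0],
              [ 2.0, -3.0,  4.0, -3.0]])

def solution(M):
    """Solve the system represented by the augmented matrix [A | b]."""
    return np.linalg.solve(M[:, :-1], M[:, -1])

before = solution(M)

# One elementary row operation of each type:
M[[0, 1]] = M[[1, 0]]        # 1. interchange R1 and R2
M[1] = 0.5 * M[1]            # 2. multiply R2 by the nonzero constant 1/2
M[2] = M[2] - 2.0 * M[0]     # 3. add -2*R1 to R3

after = solution(M)
print(before, after)         # both equal (-2, 1, 1) up to rounding
```

Here `np.linalg.solve` plays the role of an oracle for the solution; the point is only that the answer is identical before and after the row operations.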
The goal with row reducing is to transform the original linear system into one having a triangular structure and then perform back substitution to solve the system. This is best explained via an example.

Example 1.5. Use back substitution on the augmented matrix

    [ 1  0  −2  −4
      0  1  −1   0
      0  0   1   1 ]

to solve the associated linear system.

Solution. Notice that the augmented matrix has a triangular structure. The third row corresponds to the equation x3 = 1. The second row corresponds to the equation x2 − x3 = 0, and therefore x2 = x3 = 1. The first row corresponds to the equation x1 − 2x3 = −4, and therefore x1 = −4 + 2x3 = −4 + 2 = −2. Therefore, the solution is (−2, 1, 1).

Example 1.6. Solve the linear system using elementary row operations:

    −3x1 + 2x2 + 4x3 = 12
      x1       − 2x3 = −4
     2x1 − 3x2 + 4x3 = −3

Solution. Our goal is to perform elementary row operations to obtain a triangular structure and then use back substitution to solve. The augmented matrix is

    [ −3   2   4  12
       1   0  −2  −4
       2  −3   4  −3 ].

Interchange Row 1 (R1) and Row 2 (R2):

    R1 ↔ R2:   [  1   0  −2  −4
                 −3   2   4  12
                  2  −3   4  −3 ]

As you will see, this first operation will simplify the next step. Add 3R1 to R2:

    3R1 + R2:  [  1   0  −2  −4
                  0   2  −2   0
                  2  −3   4  −3 ]

Add −2R1 to R3:

    −2R1 + R3: [  1   0  −2  −4
                  0   2  −2   0
                  0  −3   8   5 ]

Multiply R2 by 1/2:

    (1/2)R2:   [  1   0  −2  −4
                  0   1  −1   0
                  0  −3   8   5 ]

Add 3R2 to R3:

    3R2 + R3:  [  1   0  −2  −4
                  0   1  −1   0
                  0   0   5   5 ]

Multiply R3 by 1/5:

    (1/5)R3:   [  1   0  −2  −4
                  0   1  −1   0
                  0   0   1   1 ]

We could continue row reducing, but the row reduced augmented matrix is now in triangular form, so we use back substitution to solve. The linear system associated to the row reduced augmented matrix is

    x1 − 2x3 = −4
     x2 − x3 = 0
          x3 = 1

The last equation gives x3 = 1. From the second equation we obtain x2 − x3 = 0, and thus x2 = 1. The first equation then gives x1 = −4 + 2(1) = −2. Thus, the solution to the original system is (−2, 1, 1). You should verify that (−2, 1, 1) is a solution to the original system.

The original augmented matrix of the previous example is

    M = [ −3   2   4  12       −3x1 + 2x2 + 4x3 = 12
           1   0  −2  −4   →     x1       − 2x3 = −4
           2  −3   4  −3 ]      2x1 − 3x2 + 4x3 = −3.

After row reducing we obtained the row reduced matrix N =
    [ 1  0  −2  −4       x1 − 2x3 = −4
      0  1  −1   0   →    x2 − x3 = 0
      0  0   1   1 ]           x3 = 1.

Although the two augmented matrices M and N are clearly distinct, it is a fact that they have the same solution set.

Example 1.7. Using elementary row operations, show that the linear system is inconsistent:

     x1 + 2x3 = 1
      x2 + x3 = 0
    2x1 + 4x3 = 1

Solution. The augmented matrix is

    [ 1  0  2  1
      0  1  1  0
      2  0  4  1 ].

Perform the operation −2R1 + R3:

    −2R1 + R3:  [ 1  0  2   1
                  0  1  1   0
                  0  0  0  −1 ]

The last row of the simplified augmented matrix corresponds to the equation

    0x1 + 0x2 + 0x3 = −1.

Obviously, there are no numbers x1, x2, x3 that satisfy this equation, and therefore the linear system is inconsistent, i.e., it has no solution. In general, if we obtain a row in an augmented matrix of the form

    [0  0  0  · · ·  0  c]

where c is a nonzero number, then the linear system is inconsistent. We will call this type of row an inconsistent row. However, a row of the form [0 1 0 0 0] corresponds to the equation x2 = 0, which is perfectly valid.

# 1.4 Geometric interpretation of the solution set

The set of points (x1, x2) that satisfies the linear system

     x1 − 2x2 = −1
    −x1 + 3x2 = 3        (1.2)

is the intersection of the two lines determined by
the equations of the system. The solution of this system is (3, 2): the two lines intersect at the point (x1, x2) = (3, 2), see Figure 1.1.

Figure 1.1: The intersection point of the two lines is the solution of the linear system (1.2).

Similarly, the solution of the linear system

      x1 − 2x2 + x3 = 0
           2x2 − 8x3 = 8
    −4x1 + 5x2 + 9x3 = −9        (1.3)

is the intersection of the three planes determined by the equations of the system. In this case, there is only one solution: (29, 16, 3). In the case of a consistent system of two equations, the solution set is the line of intersection of the two planes determined by the equations of the system, see Figure 1.2.

Figure 1.2: The intersection of the two planes x1 − 2x2 + x3 = 0 and −4x1 + 5x2 + 9x3 = −9 is the solution set (a line) of the linear system (1.3).

After this lecture you should know the following:

• what a linear system is
• what it means for a linear system to be consistent and inconsistent
• what matrices are
• what the matrices associated to a linear system are
• what the elementary row operations are and how to apply them to simplify a linear system
• what it means for two matrices to be row equivalent
• how to use the method of back substitution to solve a linear system
• what an inconsistent row is
• how to identify, using elementary row operations, when a linear system is inconsistent
• the geometric interpretation of the solution set of a linear system
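The two intersection points quoted in this section can be confirmed numerically. The sketch below is an addition to the notes (it assumes NumPy): it solves systems (1.2) and (1.3) directly and recovers the points (3, 2) and (29, 16, 3).

```python
import numpy as np

# System (1.2): the two lines x1 - 2x2 = -1 and -x1 + 3x2 = 3.
A2 = np.array([[ 1.0, -2.0],
               [-1.0,  3.0]])
b2 = np.array([-1.0, 3.0])
point2 = np.linalg.solve(A2, b2)    # intersection point of the two lines

# System (1.3): three planes meeting in a single point.
A3 = np.array([[ 1.0, -2.0,  1.0],
               [ 0.0,  2.0, -8.0],
               [-4.0,  5.0,  9.0]])
b3 = np.array([0.0, 8.0, -9.0])
point3 = np.linalg.solve(A3, b3)    # intersection point of the three planes

print(point2)   # (3, 2), the point of Figure 1.1
print(point3)   # (29, 16, 3)
```

`np.linalg.solve` only applies because both coefficient matrices happen to be square and invertible; row reduction, as developed above, handles the general case.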
# Lecture 2: Row Reduction and Echelon Forms

In this lecture, we will get more practice with row reduction and in the process introduce two important types of matrix forms. We will also discuss when a linear system has a unique solution, infinitely many solutions, or no solution. Lastly, we will introduce a convenient parameter called the rank of a matrix.

# 2.1 Row echelon form (REF)

Consider the linear system

    x1 + 5x2 − 2x4 − x5 + 7x6 = −4
         2x2 − 2x3 + 3x6 = 0
              −9x4 − x5 + x6 = −1
                   5x5 + x6 = 5
                        0 = 0

having augmented matrix

    [ 1  5   0  −2  −1  7  −4
      0  2  −2   0   0  3   0
      0  0   0  −9  −1  1  −1
      0  0   0   0   5  1   5
      0  0   0   0   0  0   0 ].

The above augmented matrix has the following properties:

P1. All nonzero rows are above any rows of all zeros.
P2. The leftmost nonzero entry of a row is to the right of the leftmost nonzero entry of the row above it.

Any matrix satisfying properties P1 and P2 is said to be in row echelon form (REF). In REF, the leftmost nonzero entry in a row is called a leading entry; in the matrix above, the leading entries are the 1 in row 1, the 2 in row 2, the −9 in row 3, and the 5 in row 4. A consequence of property P2 is that every entry below a leading entry is zero. We can perform elementary row operations, or row reduction, to transform a matrix into REF.

Example 2.1. Explain why the following matrices are not in REF. Use elementary row operations to put them in REF.

    M = [ 3  −1  0  3        N = [ 7  5   0  −3
          0   0  0  0              0  3  −1   1
          0   1  3  0 ]            0  6  −5   2 ]

Solution. Matrix M fails property P1. To put M in REF, we interchange R2 with R3:

    R2 ↔ R3:  [ 3  −1  0  3
                0   1  3  0
                0   0  0  0 ]

The matrix N fails property P2. To put N in REF, we perform the operation −2R2 + R3 → R3:

    −2R2 + R3:  [ 7  5   0  −3
                  0  3  −1   1
                  0  0  −3   0 ]

Why is REF useful? Certain properties of a matrix can be easily deduced if it is in REF. For now, REF is useful to us for solving a linear system of equations. If an augmented matrix is in REF, we can use back substitution to solve the system, just as we did in Lecture 1. For example, consider the system

    8x1 − 2x2 + x3 = 4
         3x2 − x3 = 7
              2x3 = 4

whose augmented matrix is already in REF:

    [ 8  −2   1  4
      0   3  −1  7
      0   0   2  4 ]

From the last equation we obtain 2x3 = 4, and thus x3 = 2. Substituting x3 = 2 into the second equation we obtain that x2 = 3. Substituting x3 = 2 and x2 = 3 into
the first equation we obtain that x1 = 1.

# 2.2 Reduced row echelon form (RREF)

Although REF simplifies the problem of solving a linear system, later on in the course we will need to completely row reduce matrices into what is called reduced row echelon form (RREF). A matrix is in RREF if it is in REF (so it satisfies properties P1 and P2) and in addition satisfies the following properties:

P3. The leading entry in each nonzero row is a 1.
P4. All the entries above (and below) a leading 1 are zero.

A leading 1 in the RREF of a matrix is called a pivot. For example, the following matrix in RREF has three pivots, the leading 1's in rows 1, 2, and 3:

    [ 1  6  0   3  0  0
      0  0  1  −4  0  5
      0  0  0   0  1  7 ]

Example 2.2. Use row reduction to transform the matrix into RREF:

    [ 0   3  −6   6  4  −5
      3  −7   8  −5  8   9
      3  −9  12  −9  6  15 ]

Solution. The first step is to make the top leftmost entry nonzero:

    R3 ↔ R1:  [ 3  −9  12  −9  6  15
                3  −7   8  −5  8   9
                0   3  −6   6  4  −5 ]

Now create a leading 1 in the first row:

    (1/3)R1:  [ 1  −3   4  −3  2   5
                3  −7   8  −5  8   9
                0   3  −6   6  4  −5 ]

Create zeros under
the newly created leading 1:

    −3R1 + R2:  [ 1  −3   4  −3  2   5
                  0   2  −4   4  2  −6
                  0   3  −6   6  4  −5 ]

Create a leading 1 in the second row:

    (1/2)R2:  [ 1  −3   4  −3  2   5
                0   1  −2   2  1  −3
                0   3  −6   6  4  −5 ]

Create zeros under the newly created leading 1:

    −3R2 + R3:  [ 1  −3   4  −3  2   5
                  0   1  −2   2  1  −3
                  0   0   0   0  1   4 ]

We have now completed the top-to-bottom phase of the row reduction algorithm. In the next phase, we work bottom-to-top and create zeros above the leading 1's. Create zeros above the leading 1 in the third row:

    −R3 + R2:  [ 1  −3   4  −3  2   5
                 0   1  −2   2  0  −7
                 0   0   0   0  1   4 ]

    −2R3 + R1:  [ 1  −3   4  −3  0  −3
                  0   1  −2   2  0  −7
                  0   0   0   0  1   4 ]

Create zeros above the leading 1 in the second row:

    3R2 + R1:  [ 1  0  −2  3  0  −24
                 0  1  −2  2  0   −7
                 0  0   0  0  1    4 ]

This completes
the row reduction algorithm and the matrix is in RREF.

Example 2.3. Use row reduction to solve the linear system:

    2x1 + 4x2 + 6x3 = 8
     x1 + 2x2 + 4x3 = 8
    3x1 + 6x2 + 9x3 = 12

Solution. The augmented matrix is

    [ 2  4  6   8
      1  2  4   8
      3  6  9  12 ].

Create a leading 1 in the first row:

    (1/2)R1:  [ 1  2  3   4
                1  2  4   8
                3  6  9  12 ]

Create zeros under the first leading 1:

    −R1 + R2:  [ 1  2  3   4
                 0  0  1   4
                 3  6  9  12 ]

    −3R1 + R3:  [ 1  2  3  4
                  0  0  1  4
                  0  0  0  0 ]

The system is consistent; however, there are only 2 nonzero rows but 3 unknown variables. This means that the solution set will contain 3 − 2 = 1 free parameter. The second row in the augmented matrix is equivalent to the equation x3 = 4. The first row is equivalent to the equation

    x1 + 2x2 + 3x3 = 4,

and after substituting x3 = 4 we obtain x1 + 2x2 = −8. We now must choose one of the variables x1 or x2 to be a parameter, say t, and solve for the remaining variable. If we set x2 = t, then from x1 + 2x2 = −8 we obtain x1 = −8 − 2t. We can therefore write the solution set for the linear system as

    x1 = −8 − 2t
    x2 = t
    x3 = 4        (2.1)

where t can be any real number. If we had chosen x1 to be the parameter, say x1 = t, then the solution set can be written as

    x1 = t
    x2 = −4 − (1/2)t
    x3 = 4        (2.2)

Although (2.1) and (2.2) are two different parameterizations, they both give the same solution set.

In general, if a linear system has n unknown variables and the row reduced augmented matrix has r leading entries, then the number of free parameters d in the solution set is

    d = n − r.

Thus, when performing back substitution, we will have to set d of the unknown variables to arbitrary parameters. In the previous example, there are n = 3 unknown variables and the row reduced augmented matrix contained r = 2 leading entries. The number of free parameters was therefore d = n − r = 3 − 2 = 1. Because the number of leading entries r in the row reduced coefficient matrix determines the number of free parameters, we will refer to r as the rank of the coefficient matrix:

    r = rank(A).

Later in the course, we will give a more geometric interpretation to rank(A).

Example 2.4. Solve the linear system represented by the augmented matrix

    [ 1  −7   2  −5   8  10
      0   1  −3   3   1  −5
      0   0   0   1  −1   4 ]

Solution. The number of unknowns is n = 5 and the augmented matrix has rank r = 3 (leading entries). Thus, the solution set is parameterized by d = 5 − 3 = 2 free variables, call them t and s. The last equation of the augmented matrix is x4 − x5 = 4. We choose x5 to be the first parameter, so we set x5 = t. Therefore, x4 = 4 + t. The second equation of the augmented matrix is

    x2 − 3x3 + 3x4 + x5 = −5

and the unassigned variables are x2 and x3. We choose x3 to be the second parameter, say x3 = s. Then

    x2 = −5 + 3x3 − 3x4 − x5 = −5 + 3s − 3(4 + t) − t = −17 − 4t + 3s.

We now use the first equation of the augmented matrix to write x1 in terms of the other variables:

    x1 = 10 + 7x2 − 2x3 + 5x4 − 8x5
       = 10 + 7(−17 − 4t + 3s) − 2s + 5(4 + t) − 8t
       = −89 − 31t + 19s.

Thus, the solution set is

    x1 = −89 − 31t + 19s
    x2 = −17 − 4t + 3s
    x3 = s
    x4 = 4 + t
    x5 = t

where t and s are arbitrary real numbers. Choose arbitrary numbers for t and s and substitute the corresponding list (x1, x2, . . . , x5) into the system of equations to verify that it is a solution.

# 2.3 Existence and uniqueness of solutions

The REF or RREF of an augmented matrix leads to three distinct possibilities for the solution set of a linear system.

Theorem 2.5: Let [A b] be the augmented matrix of a linear system. One of the following distinct possibilities will occur:

1. The augmented matrix will contain an inconsistent row.
2. All the rows of the augmented matrix are consistent and there are no free parameters.
3. All the rows of the augmented matrix are consistent and there are d ≥ 1 variables that must be set to arbitrary parameters.

In Case 1, the linear system is inconsistent and thus has no solution. In Case 2, the linear system is consistent and has only one (and thus unique) solution. This case occurs when r = rank(A) = n, since then the number of free parameters is d = n − r = 0. In Case 3, the linear system is consistent and has infinitely many solutions. This case occurs when r < n, since then the number of free parameters is d = n − r > 0.

After this lecture you should know the following:

• what the REF is and how to compute it
• what the RREF is and how to compute it
• how to solve linear systems using row reduction (Practice!!!)
• how to identify when a linear system is inconsistent
• how to identify when a linear system is consistent
• what the rank of a matrix is
• how to compute the number of free parameters in a solution set
• what the three possible cases for the solution set of a linear system are (Theorem 2.5)

# Lecture 3: Vector Equations

In this lecture, we introduce vectors and vector equations. Specifically, we introduce the linear combination problem, which simply asks whether it is possible to express one vector in terms of other vectors; we will be more precise in what follows. As we will see, solving the linear combination problem reduces to solving a linear system of equations.

# 3.1 Vectors in Rn

Recall that a column vector in Rn is an
n × 1 matrix. From now on, we will drop the "column" descriptor and simply use the word vectors. It is important to emphasize that a vector in Rn is simply a list of n numbers; you are safe (and highly encouraged!) to forget the idea that a vector is an object with an arrow. Here is a vector in R2:

    v = [  3
          −1 ].

Here is a vector in R3:

    v = [ −3
           0
          11 ].

Here is a vector in R6:

    v = [  9
           0
          −3
           6
           0
           3 ].

To indicate that v is a vector in Rn, we will use the notation v ∈ Rn. The mathematical symbol ∈ means "is an element of". When we write vectors within a paragraph, we will write them using list notation instead of column notation, e.g., v = (−1, 4) instead of the corresponding column vector.

We can add and subtract vectors, and multiply vectors by numbers, or scalars. For example, here is the addition of two vectors:

    (0, −5, 9, 2) + (4, −3, 0, 1) = (4, −8, 9, 3).

And the multiplication of a scalar with a vector:

    3 · (1, −3, 5) = (3, −9, 15).

And here are both operations combined:

    −2 (4, −8, 3) + 3 (−2, 9, 4) = (−8, 16, −6) + (−6, 27, 12) = (−14, 43, 6).

These operations constitute "the algebra" of vectors. As the following example illustrates, vectors can be used in a natural way to represent the solution of a linear system.

Example 3.1. Write the general solution in vector form of the linear system represented by the augmented matrix

    [A b] = [ 1  −7   2  −5   8  10
              0   1  −3   3   1  −5
              0   0   0   1  −1   4 ]

Solution. The number of unknowns is n = 5 and the associated coefficient matrix A has rank r = 3. Thus, the solution set is parameterized by d = n − r = 2 parameters. This system was considered in Example 2.4 and the general solution was found to be

    x1 = −89 − 31t1 + 19t2
    x2 = −17 − 4t1 + 3t2
    x3 = t2
    x4 = 4 + t1
    x5 = t1

where t1 and t2 are arbitrary real numbers. The solution in vector form therefore takes the form

    x = (x1, x2, x3, x4, x5)
      = (−89 − 31t1 + 19t2, −17 − 4t1 + 3t2, t2, 4 + t1, t1)
      = (−89, −17, 0, 4, 0) + t1 (−31, −4, 0, 1, 1) + t2 (19, 3, 1, 0, 0).

A fundamental problem in linear algebra is solving vector equations for an unknown vector. As an example, suppose that you are given the vectors

    v1 = (4, −8, 3),   v2 = (−2, 9, 4),   b = (−14, 43, 6),

and asked to find numbers x1 and x2 such that x1 v1 + x2 v2 = b, that is,

    x1 (4, −8, 3) + x2 (−2, 9, 4) = (−14, 43, 6).

Here the unknowns are the scalars x1 and x2. After some guess and check, we find that x1 = −2 and x2 = 3 is a solution to the problem since

    −2 (4, −8, 3) + 3 (−2, 9, 4) = (−14, 43, 6).

In some sense, the vector b is a combination of the vectors v1 and
v2. This motivates the following definition. Definition 3.2: Let v1, v2, . . . , vp be vectors in Rn. A vector b is said to be a linear combination of the vectors v1, v2, . . . , vp if there exists scalars x1, x 2, . . . , x p such that x1v1 + x2v2 + · · · + xpvp = b.The scalars in a linear combination are called the coefficients of the linear combination. As an example, given the vectors v1 = 1 −23 , v2 = −24 −6 , v3 = −156 , b = −30 −27 you can verify (and you should!) that 3v1 + 4 v2 − 2v3 = b. Therefore, we can say that b is a linear combination of v1, v2, v3 with coefficients x1 = 3, x2 = 4, and x3 = −2. # 3.2 The linear combination problem The linear combination problem is the following: > 21 Vector Equations Problem: Given vectors v1, . . . , vp and b, is b a linear combination of v1, v2, . . . , vp?For example, say you are given the vectors v1 = 121 , v2 = 110 , v3 = 212 and also b = 01 −2 . Does there exist scalars x1, x 2, x 3 such that x1v1 + x2v2 + x3v3 = b? (3.1) For obvious reasons, equation ( 3.1 ) is called a vector equation and the unknowns are x1, x2, and x3. To gain some intuition with the linear combination problem, let’s do an example by inspection. Example 3.3. Let v1 = (1 , 0, 0), let v2 = (0 , 0, 1), let b1 = (0
, 2, 0), and let b2 = (−3, 0, 7). Are b1 and b2 linear combinations of v1, v2?

Solution. For any scalars x1 and x2,

x1v1 + x2v2 = x1(1, 0, 0) + x2(0, 0, 1) = (x1, 0, x2) ≠ (0, 2, 0),

since the second component of x1v1 + x2v2 is always zero. Thus no, b1 is not a linear combination of v1, v2. On the other hand, by inspection we have that

−3v1 + 7v2 = (−3, 0, 0) + (0, 0, 7) = (−3, 0, 7) = b2

and thus yes, b2 is a linear combination of v1, v2.

These examples, of low dimension, were more-or-less obvious. Going forward, we are going to need a systematic way to solve the linear combination problem that does not rely on pure inspection. We now describe how the linear combination problem is connected to the problem of solving a system of linear equations. Consider again the vectors

v1 = (1, 2, 1), v2 = (1, 1, 0), v3 = (2, 1, 2), b = (0, 1, −2).

Do there exist scalars x1, x2, x3 such that

x1v1 + x2v2 + x3v3 = b?   (3.2)

First, let's expand the left-hand side of equation (3.2):

x1v1 + x2v2 + x3v3 = (x1, 2x1, x1) + (x2, x2, 0) + (2x3, x3, 2x3) = (x1 + x2 + 2x3, 2x1 + x2 + x3, x1 + 2x3).

We want equation (3.2) to hold, so let's equate the expansion x1v1 + x2v2 + x3v3 with b. In other words, set

(x1 + x2 + 2x3, 2x1 + x2 + x3, x1 + 2x3)
= (0, 1, −2).

Comparing component-by-component in the above relationship, we seek scalars x1, x2, x3 satisfying the equations

x1 + x2 + 2x3 = 0
2x1 + x2 + x3 = 1
x1 + 2x3 = −2.   (3.3)

This is just a linear system consisting of m = 3 equations and n = 3 unknowns! Thus, the linear combination problem can be solved by solving a system of linear equations for the unknown scalars x1, x2, x3. We know how to do this. In this case, the augmented matrix of the linear system (3.3) is

[A b] =
[ 1 1 2  0 ]
[ 2 1 1  1 ]
[ 1 0 2 −2 ]

Notice that the 1st column of A is just v1, the second column is v2, and the third column is v3; in other words, the augmented matrix is

[A b] = [v1 v2 v3 b].

Applying the row reduction algorithm, the solution is x1 = 0, x2 = 2, x3 = −1, and thus these coefficients solve the linear combination problem. In other words,

0v1 + 2v2 − v3 = b.

In this case, there is only one solution to the linear system, so b can be written as a linear combination of v1, v2, v3 in only one (or unique) way. You should verify these computations. We summarize the previous discussion with the following: The problem of determining if a given vector b is a linear combination of the vectors v1, v2, . . . , vp is equivalent to solving the linear system of equations with augmented matrix

[A b] = [v1 v2 · · · vp b].

Applying the existence and
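The row reduction step itself can be scripted. Below is a minimal Gauss–Jordan sketch in pure Python using exact fractions; the helper `solve_unique` is our name, and it assumes the system has a unique solution, as system (3.3) does:

```python
from fractions import Fraction

def solve_unique(A, b):
    """Gauss-Jordan elimination on [A | b]; assumes a unique solution exists."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # find a pivot row and move it into place
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row, then eliminate the column everywhere else
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [row[-1] for row in M]

# coefficient matrix and right-hand side of system (3.3)
A = [[1, 1, 2], [2, 1, 1], [1, 0, 2]]
b = [0, 1, -2]
assert solve_unique(A, b) == [0, 2, -1]  # x1 = 0, x2 = 2, x3 = -1
```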
uniqueness Theorem 2.5, the only three possibilities for the linear combination problem are:

1. If the linear system is inconsistent then b is not a linear combination of v1, v2, . . . , vp, i.e., there do not exist scalars x1, x2, . . . , xp such that x1v1 + x2v2 + · · · + xpvp = b.
2. If the linear system is consistent and the solution is unique then b can be written as a linear combination of v1, v2, . . . , vp in only one way.
3. If the linear system is consistent and the solution set has free parameters, then b can be written as a linear combination of v1, v2, . . . , vp in infinitely many ways.

Example 3.4. Is the vector b = (7, 4, −3) a linear combination of the vectors v1 = (1, −2, −5), v2 = (2, 5, 6)?

Solution. Form the augmented matrix:

[v1 v2 b] =
[  1 2  7 ]
[ −2 5  4 ]
[ −5 6 −3 ]

The RREF of the augmented matrix is

[ 1 0 3 ]
[ 0 1 2 ]
[ 0 0 0 ]

and therefore the solution is x1 = 3 and x2 = 2. Therefore, yes, b is a linear combination of v1, v2:

3v1 + 2v2 = 3(1, −2, −5) + 2(2, 5, 6) = (7, 4, −3) = b.

Notice that the solution set does not contain any free parameters because n = 2 (unknowns) and r = 2 (rank), and so d = 0. Therefore, the above linear combination is the only way to write b as a linear combination of v1 and v2.

Example 3.5. Is the vector b = (1, 0, 1) a
linear combination of the vectors v1 = (1, 0, 2), v2 = (0, 1, 0), v3 = (2, 1, 4)?

Solution. The augmented matrix of the corresponding linear system is

[ 1 0 2 1 ]
[ 0 1 1 0 ]
[ 2 0 4 1 ]

After row reducing we obtain

[ 1 0 2  1 ]
[ 0 1 1  0 ]
[ 0 0 0 −1 ]

The last row is inconsistent, and therefore the linear system does not have a solution. Therefore, no, b is not a linear combination of v1, v2, v3.

Example 3.6. Is the vector b = (8, 8, 12) a linear combination of the vectors v1 = (2, 1, 3), v2 = (4, 2, 6), v3 = (6, 4, 9)?

Solution. The augmented matrix is

[ 2 4 6  8 ]       [ 1 2 3 4 ]
[ 1 2 4  8 ]  REF  [ 0 0 1 4 ]
[ 3 6 9 12 ] −−−→ [ 0 0 0 0 ]

The system is consistent and therefore b is a linear combination of v1, v2, v3. In this case, the solution set contains d = 1 free parameter and therefore it is possible to write b as a linear combination of v1, v2, v3 in infinitely many ways. In terms of the parameter t, the solution set is

x1 = −8 − 2t
x2 = t
x3 = 4

Choosing any t gives scalars that can be used to write b as a linear combination of v1, v2, v3. For example, choosing t = 1 we obtain x1 = −10, x2 = 1, and x3 = 4, and you can verify that

−10v1 + v2 + 4v3 = −10(2, 1, 3) + (4, 2, 6) + 4(6, 4, 9) = (8, 8, 12) = b.

Or, choosing t =
−2 we obtain x1 = −4, x2 = −2, and x3 = 4, and you can verify that

−4v1 − 2v2 + 4v3 = −4(2, 1, 3) − 2(4, 2, 6) + 4(6, 4, 9) = (8, 8, 12) = b.

We make a few important observations on linear combinations of vectors. Given vectors v1, v2, . . . , vp, there are certain vectors b that can be written as a linear combination of v1, v2, . . . , vp in an obvious way. The zero vector b = 0 can always be written as a linear combination of v1, v2, . . . , vp:

0 = 0v1 + 0v2 + · · · + 0vp.

Each vi itself can be written as a linear combination of v1, v2, . . . , vp; for example,

v2 = 0v1 + (1)v2 + 0v3 + · · · + 0vp.

More generally, any scalar multiple of vi can be written as a linear combination of v1, v2, . . . , vp; for example,

xv2 = 0v1 + xv2 + 0v3 + · · · + 0vp.

By varying the coefficients x1, x2, . . . , xp, we see that there are infinitely many vectors b that can be written as a linear combination of v1, v2, . . . , vp. The "space" of all the possible linear combinations of v1, v2, . . . , vp has a name, which we introduce next.

# 3.3 The span of a set of vectors

Given a set of vectors {v1, v2, . . . , vp}, we have been considering the problem of whether or not a given
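Every choice of t in Example 3.6 really does give valid coefficients; a short loop can spot-check several values (the vector names mirror that example):

```python
v1 = [2, 1, 3]
v2 = [4, 2, 6]
v3 = [6, 4, 9]
b = [8, 8, 12]

# general solution of Example 3.6: x1 = -8 - 2t, x2 = t, x3 = 4
for t in range(-5, 6):
    combo = [(-8 - 2 * t) * a + t * c + 4 * e
             for a, c, e in zip(v1, v2, v3)]
    assert combo == b  # holds for every t
```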
vector b is a linear combination of {v1, v2, . . . , vp}. We now take another point of view and instead consider the idea of generating all vectors that are a linear combination of {v1, v2, . . . , vp}. So how do we generate a vector that is guaranteed to be a linear combination of {v1, v2, . . . , vp}? For example, if v1 = (2, 1, 3), v2 = (4, 2, 6), and v3 = (6, 4, 9) then

−10v1 + v2 + 4v3 = −10(2, 1, 3) + (4, 2, 6) + 4(6, 4, 9) = (8, 8, 12).

Thus, by construction, the vector b = (8, 8, 12) is a linear combination of {v1, v2, v3}. This discussion leads us to the following definition.

Definition 3.7: Let v1, v2, . . . , vp be vectors. The set of all vectors that are a linear combination of v1, v2, . . . , vp is called the span of v1, v2, . . . , vp, and we denote it by S = span{v1, v2, . . . , vp}.

By definition, the span of a set of vectors is a collection of vectors, or a set of vectors. If b is a linear combination of v1, v2, . . . , vp then b is an element of the set span{v1, v2, . . . , vp}, and we write this as b ∈ span{v1, v2, . . . , vp}. By definition, writing that b ∈ span{v1, v2, . . . , vp} implies that there exist scalars x1, x2, . . . , xp such that x1v1 + x2v2 +
· · · + xpvp = b.

Even though span{v1, v2, . . . , vp} is an infinite set of vectors, it is not necessarily true that it is the whole space Rn. The set span{v1, v2, . . . , vp} is just a collection of infinitely many vectors, but it has some geometric structure. In R2 and R3 we can visualize span{v1, v2, . . . , vp}. In R2, the span of a single nonzero vector, say v ∈ R2, is a line through the origin in the direction of v, see Figure 3.1.

Figure 3.1: The span of a single non-zero vector in R2.

In R2, the span of two vectors v1, v2 ∈ R2 that are not multiples of each other is all of R2. That is, span{v1, v2} = R2. For example, with v1 = (1, 0) and v2 = (0, 1), it is true that span{v1, v2} = R2. In R3, the span of two vectors v1, v2 ∈ R3 that are not multiples of each other is a plane through the origin containing v1 and v2, see Figure 3.2.

Figure 3.2: The span of two vectors, not multiples of each other, in R3.

In R3, the span of a single vector is a line through the origin, and the span of three vectors that do not depend on each other (we will make this precise soon) is all of R3.

Example 3.8. Is the vector b = (7, 4, −3) in the span of
the vectors v1 = (1, −2, −5), v2 = (2, 5, 6)? In other words, is b ∈ span{v1, v2}?

Solution. By definition, b is in the span of v1 and v2 if there exist scalars x1 and x2 such that x1v1 + x2v2 = b, that is, if b can be written as a linear combination of v1 and v2. From our previous discussion on the linear combination problem, we must consider the augmented matrix [v1 v2 b]. Using row reduction, the augmented matrix is consistent and there is only one solution (see Example 3.4). Therefore, yes, b ∈ span{v1, v2} and the linear combination is unique.

Example 3.9. Is the vector b = (1, 0, 1) in the span of the vectors v1 = (1, 0, 2), v2 = (0, 1, 0), v3 = (2, 1, 4)?

Solution. From Example 3.5, we have that

[v1 v2 v3 b]  REF  [ 1 0 2  1 ]
            −−−→  [ 0 1 1  0 ]
                   [ 0 0 0 −1 ]

The last row is inconsistent and therefore b is not in span{v1, v2, v3}.

Example 3.10. Is the vector b = (8, 8, 12) in the span of the vectors v1 = (2, 1, 3), v2 = (4, 2, 6), v3 = (6, 4, 9)?

Solution. From Example 3.6, we have that

[v1 v2 v3 b]  REF  [ 1 2 3 4 ]
            −−−→  [ 0 0 1 4 ]
                   [ 0 0 0 0 ]

The system is consistent and therefore b ∈ span{v1, v2, v3}. In this case, the solution set contains d = 1 free parameter and therefore it is possible to write b as a linear combination of v1, v2, v3 in infinitely many ways.

Example
3.11. Answer the following with True or False, and explain your answer.

(a) The vector b = (1, 2, 3) is in the span of the set of vectors (−1, 3, 0), (2, −7, 0), (4, −5, 0).
(b) The solution set of the linear system whose augmented matrix is [v1 v2 v3 b] is the same as the solution set of the vector equation x1v1 + x2v2 + x3v3 = b.
(c) Suppose that the augmented matrix [v1 v2 v3 b] has an inconsistent row. Then either b can be written as a linear combination of v1, v2, v3 or b ∈ span{v1, v2, v3}.
(d) The span of the vectors {v1, v2, v3} (at least one of which is nonzero) contains only the vectors v1, v2, v3 and the zero vector 0.

After this lecture you should know the following:

• what a vector is
• what a linear combination of vectors is
• what the linear combination problem is
• the relationship between the linear combination problem and the problem of solving linear systems of equations
• how to solve the linear combination problem
• what the span of a set of vectors is
• the relationship between what it means for a vector b to be in the span of v1, v2, . . . , vp and the problem of writing b as a linear combination of v1, v2, . . . , vp
• the geometric interpretation of the span of a set of vectors

# Lecture 4 The Matrix Equation Ax = b

In this lecture, we introduce the operation of matrix-vector multiplication and how it relates to the linear combination problem.

# 4.1 Matrix-vector multiplication
We begin with the definition of matrix-vector multiplication.

Definition 4.1: Given a matrix A ∈ Mm×n and a vector x ∈ Rn, with

A =
[ a11 a12 a13 · · · a1n ]
[ a21 a22 a23 · · · a2n ]
[  ⋮    ⋮    ⋮         ⋮  ]
[ am1 am2 am3 · · · amn ]

and x = (x1, x2, . . . , xn), we define the product of A and x as the vector Ax in Rm given by

Ax = (a11 x1 + a12 x2 + · · · + a1n xn,  a21 x1 + a22 x2 + · · · + a2n xn,  . . . ,  am1 x1 + am2 x2 + · · · + amn xn).

For the product Ax to be well-defined, the number of columns of A must equal the number of components of x. Another way of saying this is that the outer dimension of A must equal the inner dimension of x:

(m × n) · (n × 1) → m × 1

Example 4.2. Compute Ax.

(a) A = [1 −1 3 0], x = (2, −4, −3, 8)

(b) A =
[ 3  3 −2 ]
[ 4 −4 −1 ]
x = (1, 0, −1)

(c) A =
[ −1  1  0 ]
[  4  1 −2 ]
[  3 −3  3 ]
[  0 −2 −3 ]
x = (−1, 2, −2)

Solution. We compute:

(a) Ax = [(1)(2) + (−1)(−4) + (3)(−3) + (0)(8)] =
[−3]

(b) Ax =
[ (3)(1) + (3)(0) + (−2)(−1)  ]   [ 5 ]
[ (4)(1) + (−4)(0) + (−1)(−1) ] = [ 5 ]

(c) Ax =
[ (−1)(−1) + (1)(2) + (0)(−2)  ]   [   3 ]
[ (4)(−1) + (1)(2) + (−2)(−2)  ]   [   2 ]
[ (3)(−1) + (−3)(2) + (3)(−2)  ] = [ −15 ]
[ (0)(−1) + (−2)(2) + (−3)(−2) ]   [   2 ]

We now list two important properties of matrix-vector multiplication.

Theorem 4.3: Let A be an m × n matrix.
(a) For any vectors u, v in Rn it holds that A(u + v) = Au + Av.
(b) For any vector u and scalar c it holds that A(cu) = c(Au).

Example 4.4. For the given data, verify that the properties of Theorem 4.3 hold:

A =
[ 3 −3 ]
[ 2  1 ]
u = (−1, 3), v = (2, −1), c = −2.

# 4.2 Matrix-vector multiplication and linear combinations

Recall from the general definition of matrix-vector multiplication that

Ax = (a11 x1 + a12 x2 + · · · + a1n xn,  . . . ,  am1 x1 + am2 x2 + · · · + amn xn).   (4.1)

There is an important way to decompose matrix-vector multiplication involving a linear combination. To see how, let v1, v2,
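Both the row-wise Definition 4.1 and the column-combination view give the same vector, which can be checked directly in code. A sketch using Example 4.2(c); the helper names are ours:

```python
def matvec_rows(A, x):
    """Definition 4.1: dot product of each row of A with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_columns(A, x):
    """Alternative view: Ax built as the combination x1*col1 + ... + xn*coln."""
    result = [0] * len(A)
    for j, xj in enumerate(x):
        col = [row[j] for row in A]
        result = [r + xj * c for r, c in zip(result, col)]
    return result

A = [[-1, 1, 0], [4, 1, -2], [3, -3, 3], [0, -2, -3]]  # Example 4.2(c)
x = [-1, 2, -2]

assert matvec_rows(A, x) == [3, 2, -15, 2]
assert matvec_rows(A, x) == matvec_columns(A, x)
```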
. . . , vn denote the columns of A and consider the following linear combination:

x1v1 + x2v2 + · · · + xnvn = (x1a11 + x2a12 + · · · + xna1n,  x1a21 + x2a22 + · · · + xna2n,  . . . ,  x1am1 + x2am2 + · · · + xnamn).   (4.2)

We observe that expressions (4.1) and (4.2) are equal! Therefore, if A = [v1 v2 · · · vn] and x = (x1, x2, . . . , xn) then

Ax = x1v1 + x2v2 + · · · + xnvn.

In summary, the vector Ax is a linear combination of the columns of A where the scalars in the linear combination are the components of x! This (important) observation gives an alternative way to compute Ax.

Example 4.5. Given

A =
[ −1  1  0 ]
[  4  1 −2 ]
[  3 −3  3 ]
[  0 −2 −3 ]
x = (−1, 2, −2),

compute Ax in two ways: (1) using the original Definition 4.1, and (2) as a linear combination of the columns of A.

# 4.3 The matrix equation problem

As we have seen, with a matrix A and any vector x, we can produce a new output vector via the multiplication Ax. If A is an m × n matrix then we must have x ∈ Rn and the output vector Ax is in Rm. We now introduce the following problem:

Problem: Given a matrix A ∈ Mm×n and a vector b ∈ Rm, find, if possible, a vector x ∈ Rn such that Ax =
b.   (⋆)

Equation (⋆) is a matrix equation where the unknown variable is x. If u is a vector such that Au = b, then we say that u is a solution to the equation Ax = b. For example, suppose that

A =
[ 1 0 ]
[ 1 0 ]
b = (−3, 7).

Does the equation Ax = b have a solution? Well, for any x = (x1, x2) we have that

Ax =
[ 1 0 ] [ x1 ]   [ x1 ]
[ 1 0 ] [ x2 ] = [ x1 ]

and thus any output vector Ax has equal entries. Since b does not have equal entries, the equation Ax = b has no solution. We now describe a systematic way to solve matrix equations. As we have seen, the vector Ax is a linear combination of the columns of A with the coefficients given by the components of x. Therefore, the matrix equation problem is equivalent to the linear combination problem. In Lecture 3, we showed that the linear combination problem can be solved by solving a system of linear equations. Putting all this together then, if A = [v1 v2 · · · vn] and b ∈ Rm then:

To find a vector x ∈ Rn that solves the matrix equation Ax = b, we solve the linear system whose augmented matrix is [A b] = [v1 v2 · · · vn b].

From now on, a system of linear equations such as

a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + · · · + a3n xn = b3
⋮
am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bm

will be written in the compact form Ax = b, where A is the coefficient matrix of the linear system, b is the output vector, and x is the unknown vector to be solved for. We summarize our findings with the following theorem.

Theorem 4.6: Let A ∈ Mm×n and b ∈ Rm. The following statements are equivalent:
(a) The equation Ax = b has a solution.
(b) The vector b is a linear combination of the columns of A.
(c) The linear system represented by the augmented matrix [A b] is consistent.

Example 4.7. Solve, if possible, the matrix equation Ax = b if

A =
[  1  3 −4 ]
[  1  5  2 ]
[ −3 −7 −6 ]
b = (−2, 4, 12).

Solution. First form the augmented matrix:

[A b] =
[  1  3 −4 −2 ]
[  1  5  2  4 ]
[ −3 −7 −6 12 ]

Performing the row reduction algorithm we obtain

[  1  3 −4 −2 ]   [ 1 3  −4 −2 ]
[  1  5  2  4 ] ∼ [ 0 1   3  3 ]
[ −3 −7 −6 12 ]   [ 0 0 −12  0 ]

Here r = rank(A) = 3 and therefore d = 0, i.e., no free parameters. Performing back substitution we obtain x1 = −11, x2 = 3, and x3 = 0. Thus, the solution to the matrix equation is unique (no free parameters) and is given by x = (−11, 3, 0). Let's verify that Ax = b:

Ax = (−11 + 9 + 0, −11 + 15 + 0, 33 − 21 + 0) = (−2, 4, 12)
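The back-substitution result of Example 4.7 can be double-checked by computing Ax directly; a quick sketch:

```python
A = [[1, 3, -4], [1, 5, 2], [-3, -7, -6]]
b = [-2, 4, 12]
x = [-11, 3, 0]  # solution found by row reduction in Example 4.7

# compute Ax row by row (Definition 4.1) and compare with b
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
assert Ax == b
```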
= b.

In other words, b is a linear combination of the columns of A:

−11(1, 1, −3) + 3(3, 5, −7) + 0(−4, 2, −6) = (−2, 4, 12).

Example 4.8. Solve, if possible, the matrix equation Ax = b if

A =
[ 1 2 ]
[ 2 4 ]
b = (3, −4).

Solution. Row reducing the augmented matrix [A b] we get

[ 1 2  3 ]  −2R1+R2  [ 1 2   3 ]
[ 2 4 −4 ]  −−−−−→  [ 0 0 −10 ]

The last row is inconsistent and therefore there is no solution to the matrix equation Ax = b. In other words, b is not a linear combination of the columns of A.

Example 4.9. Solve, if possible, the matrix equation Ax = b if

A =
[ 1 −1 2 ]
[ 0  3 6 ]
b = (2, −1).

Solution. First note that the unknown vector x is in R3 because A has n = 3 columns. The linear system Ax = b has m = 2 equations and n = 3 unknowns. The coefficient matrix A has rank r = 2, and therefore the solution set will contain d = n − r = 1 parameter. The augmented matrix [A b] is

[A b] =
[ 1 −1 2  2 ]
[ 0  3 6 −1 ]

Let x3 = t be the parameter and use the last row to solve for x2:

x2 = −1/3 − 2t.

Now use the first row to solve for x1:

x1 = 2 + x2 − 2x3 = 2 + (−1/3 − 2t) − 2t = 5/3 − 4t.

Thus, the solution set to the linear system is

x1 = 5/3 − 4t
x2 = −1/
3 − 2t
x3 = t

where t is an arbitrary number. Therefore, the matrix equation Ax = b has an infinite number of solutions and they can all be written as

x = (5/3 − 4t, −1/3 − 2t, t)

where t is an arbitrary number. Equivalently, b can be written as a linear combination of the columns of A in infinitely many ways. For example, choosing t = −1 gives the particular solution x = (17/3, 5/3, −1), and you can verify that A(17/3, 5/3, −1) = b.

Recall from Definition 3.7 that the span of a set of vectors v1, v2, . . . , vp, which we denoted by span{v1, v2, . . . , vp}, is the space of vectors that can be written as a linear combination of the vectors v1, v2, . . . , vp.

Example 4.10. Is the vector b in the span of the vectors v1, v2?

b = (0, 4, 4), v1 = (3, −2, 1), v2 = (−5, 6, 1)

Solution. The vector b is in span{v1, v2} if we can find scalars x1, x2 such that x1v1 + x2v2 = b. If we let A ∈ R3×2 be the matrix

A = [v1 v2] =
[  3 −5 ]
[ −2  6 ]
[  1  1 ]

then we need to solve the matrix equation Ax = b. Note that here x = (x1, x2) ∈ R2. Performing row reduction on the augmented matrix [A b] we get

[  3 −5 0 ]   [ 1 0 2.5 ]
[ −2  6 4 ] ∼ [ 0 1 1.5 ]
[  1  1 4 ]   [ 0 0 0   ]

Therefore, the linear system
is consistent and has solution x = (2.5, 1.5). Therefore, b is in span{v1, v2}, and b can be written in terms of v1 and v2 as

2.5v1 + 1.5v2 = b.

If v1, v2, . . . , vp are vectors in Rn and it happens to be true that span{v1, v2, . . . , vp} = Rn, then we say that the set of vectors {v1, v2, . . . , vp} spans all of Rn. From Theorem 4.6, we have the following.

Theorem 4.11: Let A ∈ Mm×n be a matrix with columns v1, v2, . . . , vn, that is, A = [v1 v2 · · · vn]. The following are equivalent:
(a) span{v1, v2, . . . , vn} = Rm
(b) Every b ∈ Rm can be written as a linear combination of v1, v2, . . . , vn.
(c) The matrix equation Ax = b has a solution for any b ∈ Rm.
(d) The rank of A is m.

Example 4.12. Do the vectors v1, v2, v3 span R3?

v1 = (1, −3, 5), v2 = (2, −4, 2), v3 = (−1, 2, 3)

Solution. From Theorem 4.11, the vectors v1, v2, v3 span R3 if the matrix A = [v1 v2 v3] has rank r = 3 (leading entries in its REF/RREF). The RREF of A is

[  1  2 −1 ]   [ 1 0 0 ]
[ −3 −4  2 ] ∼ [ 0 1 0 ]
[  5  2  3 ]   [ 0 0 1 ]

which does indeed have r = 3 leading entries. Therefore, regardless of the choice of b ∈ R3, the augmented matrix [A b] will be consistent. Therefore, the vectors v1, v2, v3 span R3:

span{v1, v2, v3}
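Theorem 4.11(d) reduces the spanning question to a rank computation, which can be scripted with exact arithmetic. A minimal sketch (the helper `rank` is our name), applied to the matrix of Example 4.12:

```python
from fractions import Fraction

def rank(A):
    """Rank via Gaussian elimination with exact fractions."""
    M = [[Fraction(x) for x in row] for row in A]
    r, n_rows, n_cols = 0, len(M), len(M[0])
    for col in range(n_cols):
        pivot = next((i for i in range(r, n_rows) if M[i][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(n_rows):
            if i != r and M[i][col] != 0:
                M[i] = [x - M[i][col] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# columns v1, v2, v3 from Example 4.12
A = [[1, 2, -1], [-3, -4, 2], [5, 2, 3]]
assert rank(A) == 3  # rank equals m = 3, so the columns span R^3
```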
= R3. In other words, every vector b ∈ R3 can be written as a linear combination of v1, v2, v3.

After this lecture you should know the following:

• how to multiply a matrix A with a vector x
• that the product Ax is a linear combination of the columns of A
• how to solve the matrix equation Ax = b if A and b are known
• how to determine if a set of vectors {v1, v2, . . . , vp} in Rm spans all of Rm
• the relationship between the equation Ax = b, when b can be written as a linear combination of the columns of A, and when the augmented matrix [A b] is consistent (Theorem 4.6)
• when the columns of a matrix A ∈ Mm×n span all of Rm (Theorem 4.11)
• the basic properties of matrix-vector multiplication (Theorem 4.3)

# Lecture 5 Homogeneous and Nonhomogeneous Systems

# 5.1 Homogeneous linear systems

We begin with a definition.

Definition 5.1: A linear system of the form Ax = 0 is called a homogeneous linear system.

A homogeneous system Ax = 0 always has at least one solution, namely the zero solution, because A0 = 0. A homogeneous system is therefore always consistent. The zero solution x = 0 is called the trivial solution and any non-zero solution is called a nontrivial solution. From the existence and uniqueness theorem (Theorem 2.5), we know that a consistent linear system will have either one solution or infinitely many solutions. Therefore, a homogeneous linear system has nontrivial solutions if and only if its solution set has at least one parameter. Recall that the number of parameters in the
solution set is d = n − r, where r is the rank of the coefficient matrix A and n is the number of unknowns.

Example 5.2. Does the homogeneous linear system have any nontrivial solutions?

3x1 + x2 − 9x3 = 0
x1 + x2 − 5x3 = 0
2x1 + x2 − 7x3 = 0

Solution. The linear system will have a nontrivial solution if the solution set has at least one free parameter. Form the augmented matrix:

[ 3 1 −9 0 ]
[ 1 1 −5 0 ]
[ 2 1 −7 0 ]

The RREF is:

[ 3 1 −9 0 ]   [ 1 0 −2 0 ]
[ 1 1 −5 0 ] ∼ [ 0 1 −3 0 ]
[ 2 1 −7 0 ]   [ 0 0  0 0 ]

The system is consistent. The rank of the coefficient matrix is r = 2 and thus there will be d = 3 − 2 = 1 free parameter in the solution set. If we let x3 be the free parameter, say x3 = t, then from the row equivalent augmented matrix we obtain x2 = 3x3 = 3t and x1 = 2x3 = 2t. Therefore, the general solution of the linear system is

x1 = 2t
x2 = 3t
x3 = t

The general solution can be written in vector notation as x = t(2, 3, 1), or, more compactly, if we let v = (2, 3, 1) then x = tv. Hence, any solution x to the linear system can be written as a linear combination of the vector v = (2, 3, 1). In other words, the solution set of the linear system is the span of the vector v: span{v}.

Notice that in the
previous example, when solving a homogeneous system Ax = 0 using row reduction, the last column of the augmented matrix [A 0] remains unchanged (always 0) after every elementary row operation. Hence, to solve a homogeneous system, we can row reduce the coefficient matrix A only and then set all rows equal to zero when performing back substitution.

Example 5.3. Find the general solution of the homogeneous system Ax = 0 where

A =
[ 1 2 2 1  4 ]
[ 3 7 7 3 13 ]
[ 2 5 5 2  9 ]

Solution. After row reducing we obtain

A ∼
[ 1 0 0 1 2 ]
[ 0 1 1 0 1 ]
[ 0 0 0 0 0 ]

Here n = 5 and r = 2, and therefore the number of parameters in the solution set is d = n − r = 3. The second row of rref(A) gives the equation

x2 + x3 + x5 = 0.

Setting x5 = t1 and x3 = t2 as free parameters, we obtain x2 = −x3 − x5 = −t2 − t1. From the first row we obtain the equation

x1 + x4 + 2x5 = 0.

The unknown x5 has already been assigned, so we must now choose either x1 or x4 to be a parameter. Choosing x4 = t3, we obtain x1 = −x4 − 2x5 = −t3 − 2t1. In summary, the general solution can be written as

x = t1(−2, −1, 0, 0, 1) + t2(0, −1, 1, 0, 0) + t3(−1, 0, 0, 1, 0),

where we write v1 = (−2, −1, 0, 0, 1), v2 = (0, −1, 1, 0, 0), and
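Each vector appearing in the parametric form of Example 5.3 should satisfy Av = 0, and this is easy to check in code:

```python
A = [[1, 2, 2, 1, 4], [3, 7, 7, 3, 13], [2, 5, 5, 2, 9]]
basis = [
    [-2, -1, 0, 0, 1],  # coefficient t1
    [0, -1, 1, 0, 0],   # coefficient t2
    [-1, 0, 0, 1, 0],   # coefficient t3
]
for v in basis:
    # Av should be the zero vector for each spanning vector
    Av = [sum(a * vi for a, vi in zip(row, v)) for row in A]
    assert Av == [0, 0, 0]
```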
v3 = (−1, 0, 0, 1, 0), so that x = t1v1 + t2v2 + t3v3 where t1, t2, t3 are arbitrary parameters. In other words, any solution x is in the span of v1, v2, v3: x ∈ span{v1, v2, v3}.

The form of the general solution in Example 5.3 holds in general and is summarized in the following theorem.

Theorem 5.4: Consider the homogeneous linear system Ax = 0, where A ∈ Mm×n and 0 ∈ Rm. Let r be the rank of A.
1. If r = n then the only solution to the system is the trivial solution x = 0.
2. Otherwise, if r < n, then there exist d = n − r vectors v1, v2, . . . , vd, each a solution of Ax = 0, such that any solution of the system can be written as x = t1v1 + t2v2 + · · · + tdvd. In other words, any solution x is in the span of v1, v2, . . . , vd: x ∈ span{v1, v2, . . . , vd}.

A solution x to a homogeneous system written in the form x = t1v1 + t2v2 + · · · + tdvd is said to be in parametric vector form.

# 5.2 Nonhomogeneous systems

As we have seen, a homogeneous system Ax = 0 is always consistent. However, if b is non-zero, then the nonhomogeneous linear system Ax = b may or may not have a solution. A natural question arises: What is the relationship between the solution set of the homogeneous system Ax = 0 and that of the nonhomogeneous system Ax = b when it is consistent? To answer this question, suppose that p is a solution to the nonhomogeneous system Ax = b, that is, Ap
= b. And suppose that v is a solution to the homogeneous system Ax = 0, that is, Av = 0. Now let q = p + v. Then

Aq = A(p + v) = Ap + Av = b + 0 = b.

Therefore, Aq = b. In other words, q = p + v is also a solution of Ax = b. We have therefore proved the following theorem.

Theorem 5.5: Suppose that the linear system Ax = b is consistent and let p be a solution. Then any other solution q of the system Ax = b can be written in the form q = p + v, for some vector v that is a solution to the homogeneous system Ax = 0.

Another way of stating Theorem 5.5 is the following: If the linear system Ax = b is consistent and has solutions p and q, then the vector v = q − p is a solution to the homogeneous system Ax = 0. The proof is a simple computation:

Av = A(q − p) = Aq − Ap = b − b = 0.

More generally, any solution of Ax = b can be written in the form

q = p + t1v1 + t2v2 + · · · + tdvd

where p is one particular solution of Ax = b and the vectors v1, v2, . . . , vd span the solution set of the homogeneous system Ax = 0.

There is a useful geometric interpretation of the solution set of a general linear system. We saw in Lecture 3 that we can interpret the span of a set of vectors as a plane containing the zero vector 0. Now, the general solution of Ax = b can be written as x =
p + t1v1 + t2v2 + · · · + tdvd. Therefore, the solution set of Ax = b is a shift of span{v1, v2, . . . , vd} by the vector p. This is illustrated in Figure 5.1.

Figure 5.1: The solution sets of a homogeneous and nonhomogeneous system.

Example 5.6. Write the general solution, in parametric vector form, of the linear system

3x1 + x2 − 9x3 = 2
x1 + x2 − 5x3 = 0
2x1 + x2 − 7x3 = 1.

Solution. The RREF of the augmented matrix is:

[ 3 1 −9 2 ]   [ 1 0 −2  1 ]
[ 1 1 −5 0 ] ∼ [ 0 1 −3 −1 ]
[ 2 1 −7 1 ]   [ 0 0  0  0 ]

The system is consistent and the rank of the coefficient matrix is r = 2. Therefore, there is d = 3 − 2 = 1 parameter in the solution set. Letting x3 = t be the parameter, from the second row of the RREF we have

x2 = 3t − 1.

And from the first row of the RREF we have

x1 = 2t + 1.

Therefore, the general solution of the system in parametric vector form is

x = (2t + 1, 3t − 1, t) = (1, −1, 0) + t(2, 3, 1) = p + tv.

You should check that p = (1, −1, 0) solves the linear system Ax = b, and that v = (2, 3, 1) solves the homogeneous system Ax = 0.

Example 5.7. Write the general solution, in parametric vector form, of the linear system represented by the augmented matrix

[  3 −3
6  3 ]
[ −1  1 −2 −1 ]
[  2 −2  4  2 ]

Solution. The RREF of the augmented matrix is

[  3 −3  6  3 ]   [ 1 −1 2 1 ]
[ −1  1 −2 −1 ] ∼ [ 0  0 0 0 ]
[  2 −2  4  2 ]   [ 0  0 0 0 ]

Here n = 3 and r = 1, and therefore the solution set will have d = 2 parameters. Let x3 = t1 and x2 = t2. Then from the first row we obtain

x1 = 1 + x2 − 2x3 = 1 + t2 − 2t1.

The general solution in parametric vector form is therefore

x = (1, 0, 0) + t1(−2, 0, 1) + t2(1, 1, 0) = p + t1v1 + t2v2.

You should verify that p is a solution to the linear system Ax = b: Ap = b. And that v1 and v2 are solutions to the homogeneous linear system Ax = 0: Av1 = Av2 = 0.

# 5.3 Summary

The material in this lecture is so important that we will summarize the main results. The solution set of a linear system Ax = b can be written in the form

x = p + t1v1 + t2v2 + · · · + tdvd

where Ap = b and where each of the vectors v1, v2, . . . , vd satisfies Avi = 0. Loosely speaking,

{Solution set of Ax = b} = p + {Solution set of Ax = 0}

or

{Solution set of Ax = b} = p + span{v1, v2, . . . ,
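The verification suggested at the end of Example 5.7 (Ap = b and Av1 = Av2 = 0) takes only a few lines:

```python
A = [[3, -3, 6], [-1, 1, -2], [2, -2, 4]]  # coefficient matrix of Example 5.7
b = [3, -1, 2]
p = [1, 0, 0]
v1 = [-2, 0, 1]
v2 = [1, 1, 0]

def matvec(M, x):
    """Row-wise matrix-vector product."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

assert matvec(A, p) == b             # particular solution: Ap = b
assert matvec(A, v1) == [0, 0, 0]    # homogeneous solutions: Av = 0
assert matvec(A, v2) == [0, 0, 0]
# hence every p + t1*v1 + t2*v2 also solves Ax = b
```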
vd}

where p satisfies Ap = b and Avi = 0.

After this lecture you should know the following:

• what a homogeneous/nonhomogeneous linear system is
• when a homogeneous linear system has nontrivial solutions
• how to write the general solution set of a homogeneous system in parametric vector form (Theorem 5.4)
• how to write the solution set of a nonhomogeneous system in parametric vector form (Theorem 5.5)
• the relationship between the solution sets of the nonhomogeneous equation Ax = b and the homogeneous equation Ax = 0

# Lecture 6 Linear Independence

# 6.1 Linear independence

In Lecture 3, we defined the span of a set of vectors {v1, v2, . . . , vn} as the collection of all possible linear combinations t1v1 + t2v2 + · · · + tnvn, and we denoted this set as span{v1, v2, . . . , vn}. Thus, if x ∈ span{v1, v2, . . . , vn} then by definition there exist scalars t1, t2, . . . , tn such that x = t1v1 + t2v2 + · · · + tnvn. A natural question that arises is whether or not there are multiple ways to express x as a linear combination of the vectors v1, v2, . . . , vn. For example, if v1 = (1, 2), v2 = (0, 1), v3 = (−1, −1), and x = (3, −1), then you can verify that x ∈ span{v1, v2, v3} and x can be written in infinitely many ways using v1, v2, v3. Here are three ways:

x = 3v1 − 7v2 + 0v3
x = −4v1 + 0v2
|
{
"page_id": null,
"source": 6828,
"title": "from dpo"
}
|
− 7v3 x = 0 v1 − 4v2 − 3v3. The fact that x can be written in more than one way in terms of v1, v2, v3 suggests that there might be a redundancy in the set {v1, v2, v3}. In fact, it is not hard to see that v3 = −v1 +v2,and thus v3 ∈ span {v1, v2}. The preceding discussion motivates the following definition. Definition 6.1: A set of vectors {v1, v2, . . . , vn} is said to be linearly dependent if some vj can be written as a linear combination of the other vectors, that is, if vj ∈ span {v1, . . . , vj−1, vj+1 , . . . , vn}. If {v1, v2, . . . , vn} is not linearly dependent then we say that {v1, v2, . . . , vn} is linearly independent . 49 Linear Independence Example 6.2. Consider the vectors v1 = 123 , v2 = 456 , v3 = 210 . Show that they are linearly dependent . Solution. By inspection, we have 2v1 + v3 = 246 + 210 = 456 = v2 Thus, v2 ∈ span {v1, v3} and therefore {v1, v2, v3} is linearly dependent . Notice that in the previous example, the equation 2 v1 + v3 = v2 is equivalent to 2v1 − v2 + v3 = 0. Hence, because {v1, v2 v3} is a linearly dependent set, it is possible to write the zero vector 0 as a linear combination of {v1, v2 v3} where not all the coefficients in the linear combination are zero . This leads to the following characterization of linear independence. Theorem 6.3: The set of vectors {v1, v2, . . . ,
vn} is linearly independent if and only if 0 can be written in only one way as a linear combination of {v1, v2, . . . , vn}. In other words, if t1v1 + t2v2 + · · · + tnvn = 0 then necessarily the coefficients t1, t2, . . . , tn are all zero.

Proof. If {v1, v2, . . . , vn} is linearly independent then every vector x ∈ span{v1, v2, . . . , vn} can be written uniquely as a linear combination of {v1, v2, . . . , vn}, and this applies to the particular case of the zero vector x = 0. Now assume that 0 can be written uniquely as a linear combination of {v1, v2, . . . , vn}. In other words, assume that if t1v1 + t2v2 + · · · + tnvn = 0 then t1 = t2 = · · · = tn = 0. Now take any x ∈ span{v1, v2, . . . , vn} and suppose that there are two ways to write x in terms of {v1, v2, . . . , vn}:

r1v1 + r2v2 + · · · + rnvn = x
s1v1 + s2v2 + · · · + snvn = x.

Subtracting the second equation from the first we obtain

(r1 − s1)v1 + (r2 − s2)v2 + · · · + (rn − sn)vn = x − x = 0.

The above equation is a linear combination of v1, v2, . . . , vn resulting in the zero vector 0. But we are assuming that the only way to write 0 in terms of {v1, v2, . . . , vn} is if all the coefficients are zero. Therefore, we must have r1 − s1 = 0, r2 − s2 = 0, . . . , rn − sn = 0, or equivalently, r1 = s1, r2 = s2, . . . , rn = sn. Therefore, the linear combinations

r1v1 + r2v2 + · · · + rnvn = x
s1v1 + s2v2 + · · · + snvn = x

are actually the same. Therefore, each x ∈ span{v1, v2, . . . , vn} can be written uniquely in terms of {v1, v2, . . . , vn}, and thus {v1, v2, . . . , vn} is a linearly independent set.

Because of Theorem 6.3, an alternative definition of linear independence of a set of vectors {v1, v2, . . . , vn} is that the vector equation x1v1 + x2v2 + · · · + xnvn = 0 has only the trivial solution, i.e., the solution x1 = x2 = · · · = xn = 0. Thus, if {v1, v2, . . . , vn} is linearly dependent, then there exist scalars x1, x2, . . . , xn not all zero such that x1v1 + x2v2 + · · · + xnvn = 0. Hence, if we suppose for instance that xn ≠ 0, then we can write vn in terms of the vectors v1, . . . , vn−1 as follows:

vn = −(x1/xn)v1 − (x2/xn)v2 − · · · − (xn−1/xn)vn−1.

In other words, vn ∈ span{v1, v2, . . . , vn−1}. According to Theorem 6.3, the set of vectors {v1, v2, . . . , vn} is linearly independent if the equation

x1v1 + x2v2 + · · · + xnvn = 0    (6.1)

has only the trivial solution. Now, the vector equation (6.1) is a homogeneous linear system of equations with coefficient matrix

A = [v1 v2 · · · vn].

Therefore, the set {v1, v2, . . . , vn} is linearly independent if and only if the homogeneous system Ax = 0 has only the trivial solution. But the homogeneous system Ax = 0 has only the trivial solution if there are no free parameters in its solution set. We therefore have the following.

Theorem 6.4: The set {v1, v2, . . . , vn} is linearly independent if and only if the rank of A is r = n, that is, if the number of leading entries r in the REF (or RREF) of A is exactly n.

Example 6.5. Are the vectors below linearly independent? v1 = (0, 1, 5), v2 = (1, 2, 8), v3 = (4, −1, 0)

Solution. Let A be the matrix

A = [v1 v2 v3] = [0 1 4; 1 2 −1; 5 8 0]

Performing elementary row operations we obtain

A ∼ [1 2 −1; 0 1 4; 0 0 13]

Clearly, r = rank(A) = 3, which is equal to the number of vectors n = 3. Therefore, {v1, v2, v3} is linearly independent.

Example 6.6. Are the vectors below linearly independent? v1 = (1, 2, 3), v2 = (4, 5, 6), v3 = (2, 1, 0)

Solution. Let A be the matrix

A = [v1 v2 v3] = [1 4 2; 2 5 1; 3 6 0]

Performing elementary row operations we obtain

A ∼ [1 4 2; 0 −3 −3; 0 0 0]
Clearly, r = rank(A) = 2, which is not equal to the number of vectors, n = 3. Therefore, {v1, v2, v3} is linearly dependent. We will find a nontrivial linear combination of the vectors v1, v2, v3 that gives the zero vector 0. The REF of A = [v1 v2 v3] is

A ∼ [1 4 2; 0 −3 −3; 0 0 0]

Since r = 2, the solution set of the linear system Ax = 0 has d = n − r = 1 free parameter. Using back substitution on the REF above, we find that the general solution of Ax = 0 written in parametric form is

x = t (2, −1, 1)

The vector v = (2, −1, 1) spans the solution set of the system Ax = 0. Choosing for instance t = 2 we obtain the solution x = (4, −2, 2). Therefore,

4v1 − 2v2 + 2v3 = 0

is a non-trivial linear combination of v1, v2, v3 that gives the zero vector 0. And, for instance, v3 = −2v1 + v2, that is, v3 ∈ span{v1, v2}.

Below we record some simple observations on the linear independence of simple sets:
• A set consisting of a single non-zero vector {v1} is linearly independent. Indeed, if v1 is non-zero then tv1 = 0 is true if and only if t = 0.
• A set consisting of two non-zero vectors {v1, v2} is linearly independent if and only if neither of the vectors is a multiple of the other. For example, if v2 = tv1 then tv1 − v2 = 0 is a non-trivial linear combination of v1, v2 giving the zero vector 0.
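The rank test of Theorem 6.4 is easy to check numerically. The following sketch (not part of the original notes) reproduces Example 6.6 with NumPy; it assumes NumPy is available and uses the standard functions `np.column_stack` and `np.linalg.matrix_rank`:

```python
import numpy as np

# Vectors from Example 6.6, placed as the columns of A.
v1 = np.array([1, 2, 3])
v2 = np.array([4, 5, 6])
v3 = np.array([2, 1, 0])
A = np.column_stack([v1, v2, v3])

# Theorem 6.4: the set is linearly independent iff rank(A) = n.
r = np.linalg.matrix_rank(A)
print(r)  # 2, but n = 3, so the set is linearly dependent

# The nontrivial combination found above: 4*v1 - 2*v2 + 2*v3 = 0.
print(4 * v1 - 2 * v2 + 2 * v3)  # [0 0 0]
```

Note that `matrix_rank` works with a numerical tolerance, so for hand-sized integer examples like this one it agrees with the rank computed by row reduction.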
• Any set {v1, v2, . . . , vp} containing the zero vector, say vp = 0, is linearly dependent. For example, the linear combination 0v1 + 0v2 + · · · + 0vp−1 + 2vp = 0 is a non-trivial linear combination giving the zero vector 0.

# 6.2 The maximum size of a linearly independent set

The next theorem puts a constraint on the maximum size of a linearly independent set in Rn.

Theorem 6.7: Let {v1, v2, . . . , vp} be a set of vectors in Rn. If p > n then v1, v2, . . . , vp are linearly dependent. Equivalently, if the vectors v1, v2, . . . , vp in Rn are linearly independent then p ≤ n.

Proof. Let A = [v1 v2 · · · vp]. Thus, A is an n × p matrix. Since A has n rows, the maximum rank of A is n, that is, r ≤ n. Therefore, the number of free parameters d = p − r is always positive because p > n ≥ r. Thus, the homogeneous system Ax = 0 has non-trivial solutions. In other words, there is some non-zero vector x ∈ Rp such that

Ax = x1v1 + x2v2 + · · · + xpvp = 0

and therefore {v1, v2, . . . , vp} is linearly dependent.

Theorem 6.7 will be used when we discuss the notion of the dimension of a space. Although we have not discussed the meaning of dimension, the above theorem says that in n-dimensional space Rn, a set of vectors {v1, v2, . . . , vp} consisting of more than n vectors is automatically linearly dependent.

Example 6.8. Are the vectors below linearly independent? v1 = (8, 3, 0, −2), v2 = (4, 11, −4, 6), v3 = (2, 0, 1, 1), v4 = (3, −9, −5, 3), v5 = (0, −2, −7, 7).

Solution. The vectors v1, v2, v3, v4, v5 are in R4. Therefore, by Theorem 6.7, the set {v1, . . . , v5} is linearly dependent. To see this explicitly, let A = [v1 v2 v3 v4 v5]. Then

A ∼ [1 0 0 0 −1; 0 1 0 0 1; 0 0 1 0 0; 0 0 0 1 −2]

One solution to the linear system Ax = 0 is x = (−1, 1, 0, −2, −1) and therefore

(−1)v1 + (1)v2 + (0)v3 + (−2)v4 + (−1)v5 = 0

Example 6.9. Suppose that the set {v1, v2, v3, v4} is linearly independent. Show that the set {v1, v2, v3} is also linearly independent.

Solution. We must argue that if there exist scalars x1, x2, x3 such that x1v1 + x2v2 + x3v3 = 0 then necessarily x1, x2, x3 are all zero. Suppose then that there exist scalars x1, x2, x3 such that x1v1 + x2v2 + x3v3 = 0. Then clearly it holds that x1v1 + x2v2 + x3v3 + 0v4 = 0. But the set {v1, v2, v3, v4} is linearly independent, and therefore it is necessary that x1, x2, x3 are all zero. This proves that v1, v2, v3 are also linearly independent.

The previous example can be generalized as follows: if {v1, v2, . . . , vd} is linearly independent then any (non-empty) subset of the set {v1, v2, . . . , vd} is also linearly independent.

After this lecture you should know the following:
• the definition of linear independence and be able to explain it to a colleague
• how to test if a given set of vectors is linearly independent (Theorem 6.4)
• the relationship between the linear independence of {v1, v2, . . . , vp} and the solution set of the homogeneous system Ax = 0, where A = [v1 v2 · · · vp]
• that in Rn, any set of vectors consisting of more than n vectors is automatically linearly dependent (Theorem 6.7)

# Lecture 7 Introduction to Linear Mappings

# 7.1 Vector mappings

By a vector mapping we mean simply a function T : Rn → Rm. The domain of T is Rn and the co-domain of T is Rm. The case n = m is allowed of course. In engineering or physics, the domain is sometimes called the input space and the co-domain is called the output space. Using this terminology, the points x in the domain are called the inputs and the points T(x) produced by the mapping are called the outputs.

Definition 7.1: The vector b ∈ Rm is in the range of T, or in the image of T, if there exists some x ∈ Rn such that T(x) = b. In other words, b is in the range of T if there is an input x in the domain of T that outputs b = T(x).

In general, not every point in the co-domain of T is in the range of T. For example, consider the vector mapping T : R2 → R2 defined as

T(x) = (x1² sin(x2) − cos(x1² − 1), x1² + x2² + 1).

The vector b = (3, −1) is not in the range of T because the second component of T(x) is positive. On the other hand, b = (−1, 2) is in the range of T because

T((1, 0)) = (1² sin(0) − cos(1² − 1), 1² + 0² + 1) = (−1, 2) = b.

Hence, a corresponding input for this particular b is x = (1, 0). In Figure 7.1 we illustrate the general setup of how the domain, co-domain, and range of a mapping are related. A crucial idea is that the range of T may not equal the co-domain.

Figure 7.1: The domain, co-domain, and range of a mapping.

# 7.2 Linear mappings

For our purposes, vector mappings T : Rn → Rm can be organized into two categories: (1) linear mappings and (2) nonlinear mappings.

Definition 7.2: The vector mapping T : Rn → Rm is said to be linear if the following conditions hold:
• For any u, v ∈ Rn, it holds that T(u + v) = T(u) + T(v).
• For any u ∈ Rn and any scalar c, it holds that T(cu) = cT(u).
If T is not linear then it is said to be nonlinear.

As an example, the mapping

T(x) = (x1² sin(x2) − cos(x1² − 1), x1² + x2² + 1)

is nonlinear. To see this, previously we computed that T((1, 0)) = (−1, 2). If T were linear then by property (2) of Definition 7.2 the following must hold:

T((3, 0)) = T(3 (1, 0)) = 3 T((1, 0)) = 3 (−1, 2) = (−3, 6).

However,

T((3, 0)) = (3² sin(0) − cos(3² − 1), 3² + 0² + 1) = (−cos(8), 10) ≠ (−3, 6).

Example 7.3. Is the vector mapping T : R2 → R3 linear?

T((x1, x2)) = (2x1 − x2, x1 + x2, −x1 − 3x2)

Solution. We must verify that the two conditions in Definition 7.2 hold. For the first condition, take arbitrary vectors u = (u1, u2) and v = (v1, v2). We compute:

T(u + v) = T((u1 + v1, u2 + v2))
= (2(u1 + v1) − (u2 + v2), (u1 + v1) + (u2 + v2), −(u1 + v1) − 3(u2 + v2))
= (2u1 − u2 + 2v1 − v2, u1 + u2 + v1 + v2, −u1 − 3u2 − v1 − 3v2)
= (2u1 − u2, u1 + u2, −u1 − 3u2) + (2v1 − v2, v1 + v2, −v1 − 3v2)
= T(u) + T(v)

Therefore, for arbitrary u, v ∈ R2, it holds that T(u + v) = T(u) + T(v). To prove the second condition, let c ∈ R be an arbitrary scalar. Then:

T(cu) = T((cu1, cu2))
= (2(cu1) − (cu2), (cu1) + (cu2), −(cu1) − 3(cu2))
= (c(2u1 − u2), c(u1 + u2), c(−u1 − 3u2))
= c (2u1 − u2, u1 + u2, −u1 − 3u2)
= cT(u)

Therefore, both conditions of Definition 7.2 hold, and thus T is a linear map.

Example 7.4. Let α ≥ 0 and define the mapping T : Rn → Rn by the formula T(x) = αx. If 0 ≤ α ≤ 1 then T is called a contraction and if α > 1 then T is called a dilation. In either case, show that T is a linear mapping.

Solution. Let u and v be arbitrary. Then

T(u + v) = α(u + v) = αu + αv = T(u) + T(v).

This shows that condition (1) in Definition 7.2 holds. To show that the second condition holds, let c be any scalar. Then

T(cx) = α(cx) = αcx = c(αx) = cT(x).

Therefore, both conditions of Definition 7.2 hold, and thus T is a linear mapping. To see a particular example, consider the case α = 1/2 and n = 3. Then T(x) = (1/2)x = (x1/2, x2/2, x3/2).

# 7.3 Matrix mappings

Given a matrix A ∈ Rm×n and a vector x ∈ Rn, in Lecture 4 we defined matrix-vector multiplication between A and x as an operation that produces a new output vector Ax ∈ Rm. We discussed that we could interpret A as a mapping that takes the input vector x ∈ Rn and produces the output vector Ax ∈ Rm. We can therefore associate to each matrix A a vector mapping T : Rn → Rm defined by

T(x) = Ax.

Such a mapping T will be called a matrix mapping corresponding to A and when convenient we
will use the notation TA to indicate that TA is associated to A. We proved in Lecture 4 (Theorem 4.3) that for any u, v ∈ Rn and scalar c, matrix-vector multiplication satisfies the properties:
1. A(u + v) = Au + Av
2. A(cu) = cAu.
The following theorem is therefore immediate.

Theorem 7.5: To a given matrix A ∈ Rm×n associate the mapping T : Rn → Rm defined by the formula T(x) = Ax. Then T is a linear mapping.

Example 7.6. Is the vector mapping T : R2 → R3 linear?

T((x1, x2)) = (2x1 − x2, x1 + x2, −x1 − 3x2)

Solution. In Example 7.3 we showed that T is a linear mapping using Definition 7.2. Alternatively, we observe that T is a mapping defined using matrix-vector multiplication because

T((x1, x2)) = [2 −1; 1 1; −1 −3] (x1, x2)

Therefore, T is a matrix mapping corresponding to the matrix

A = [2 −1; 1 1; −1 −3]

that is, T(x) = Ax. By Theorem 7.5, T is a linear mapping.

Let T : Rn → Rm be a vector mapping. Recall that b ∈ Rm is in the range of T if there is some input vector x ∈ Rn such that T(x) = b. In this case, we say that b is the image of x under T or that x is mapped to b under T. If T is a nonlinear mapping, finding a specific vector x such that T(x) = b is generally a difficult problem. However, if T(x) = Ax is a matrix mapping, then it is clear that finding such a vector x is equivalent to solving the matrix equation Ax = b. In summary, we have the following theorem.

Theorem 7.7: Let T : Rn → Rm be a matrix mapping corresponding to A, that is, T(x) = Ax. Then b ∈ Rm is in the range of T if and only if the matrix equation Ax = b has a solution.

Let TA : Rn → Rm be a matrix mapping, that is, TA(x) = Ax. We proved that the output vector Ax is a linear combination of the columns of A where the coefficients in the linear combination are the components of x. Explicitly, if A = [v1 v2 · · · vn] and x = (x1, x2, . . . , xn) then

Ax = x1v1 + x2v2 + · · · + xnvn.

Therefore, the range of the matrix mapping TA(x) = Ax is

Range(TA) = span{v1, v2, . . . , vn}.

In words, the range of a matrix mapping is the span of its columns. Therefore, if v1, v2, . . . , vn span all of Rm then every vector b ∈ Rm is in the range of TA.

Example 7.8. Let A = [1 3 −4; 1 5 2; −3 −7 −6], b = (−2, 4, 12). Is the vector b in the range of the matrix mapping T(x) = Ax?

Solution. From Theorem 7.7, b is in the range of T if and only if the matrix equation Ax = b has a solution. To solve the system Ax = b, row reduce the augmented matrix [A b]:

[1 3 −4 −2; 1 5 2 4; −3 −7 −6 12] ∼ [1 3 −4 −2; 0 1 3 3; 0 0 −12 0]

The system is consistent and the (unique) solution is x = (−11, 3, 0). Therefore, b is in the range of T.

# 7.4 Examples

If T : Rn → Rm is a linear mapping, then for any vectors v1, v2, . . . , vp and scalars c1, c2, . . . , cp, it holds that

T(c1v1 + c2v2 + · · · + cpvp) = c1T(v1) + c2T(v2) + · · · + cpT(vp).    (⋆)

Therefore, if all you know are the values T(v1), T(v2), . . . , T(vp) and T is linear, then you can compute T(v) for every v ∈ span{v1, v2, . . . , vp}.

Example 7.9. Let T : R2 → R2 be a linear transformation that maps u to T(u) = (3, 4) and maps v to T(v) = (−2, 5). Find T(2u + 3v).

Solution. Because T is a linear mapping we have that

T(2u + 3v) = T(2u) + T(3v) = 2T(u) + 3T(v).

We know that T(u) = (3, 4) and T(v) = (−2, 5). Therefore,

T(2u + 3v) = 2T(u) + 3T(v) = 2 (3, 4) + 3 (−2, 5) = (0, 23).

Example 7.10. (Rotations) Let Tθ : R2 → R2 be the mapping on the 2D plane that rotates every v ∈ R2 by an angle θ. Write down a formula for Tθ and show that Tθ is a linear mapping.

Solution. If v = (cos(α), sin(α)) then Tθ(v) = (cos(α + θ), sin(α + θ)). Then from the angle sum trigonometric identities:

Tθ(v) = (cos(α + θ), sin(α + θ)) = (cos(α)cos(θ) − sin(α)sin(θ), cos(α)sin(θ) + sin(α)cos(θ))

But this is exactly

Tθ(v) = [cos(θ) −sin(θ); sin(θ) cos(θ)] (cos(α), sin(α)).

If we scale v by any c > 0 then performing the same computation as above we obtain that Tθ(cv) = cTθ(v). Therefore, Tθ is a matrix mapping with corresponding matrix

A = [cos(θ) −sin(θ); sin(θ) cos(θ)].

Thus, Tθ is a linear mapping.

Example 7.11. (Projections) Let T : R3 → R3 be the vector mapping

T((x1, x2, x3)) = (x1, x2, 0).

Show that T is a linear mapping and describe the range of T.

Solution. First notice that

T((x1, x2, x3)) = (x1, x2, 0) = [1 0 0; 0 1 0; 0 0 0] (x1, x2, x3).

Thus, T is a matrix mapping corresponding to the matrix

A = [1 0 0; 0 1 0; 0 0 0].

Therefore, T is a linear mapping. Geometrically, T takes the vector x and projects it to the (x1, x2) plane, see Figure 7.2. What is the range of T? The range of T consists of all vectors in R3 of the form b = (t, s, 0) where the numbers t and s are arbitrary. For each b in the range of T, there are infinitely many
x's such that T(x) = b.

Figure 7.2: Projection onto the (x1, x2) plane

After this lecture you should know the following:
• what a vector mapping is
• what the range of a vector mapping is
• that the co-domain and range of a vector mapping are generally not the same
• what a linear mapping is and how to check when a given mapping is linear
• what a matrix mapping is and that they are linear mappings
• how to determine if a vector b is in the range of a matrix mapping
• the formula for a rotation in R2 by an angle θ

# Lecture 8 Onto and One-to-One Mappings, and the Matrix of a Linear Mapping

# 8.1 Onto Mappings

We have seen through examples that the range of a vector mapping (linear or nonlinear) is not always the entire co-domain. For example, if TA(x) = Ax is a matrix mapping and b is such that the equation Ax = b has no solutions, then the range of TA does not contain b and thus the range is not the whole co-domain.

Definition 8.1: A vector mapping T : Rn → Rm is said to be onto if for each b ∈ Rm there is at least one x ∈ Rn such that T(x) = b.

For a matrix mapping TA(x) = Ax, the range of TA is the span of the columns of A. Therefore:

Theorem 8.2: Let TA : Rn → Rm be the matrix mapping TA(x) = Ax, where A ∈ Mm×n. Then TA is onto if and only if the columns of A span all of Rm.

Combining Theorem 4.11 and Theorem 8.2 we have:

Theorem 8.3: Let TA : Rn → Rm be the matrix mapping TA(x) = Ax, where A ∈ Rm×n. Then TA is onto if and only if r = rank(A) = m.

Example 8.4. Let TA : R3 → R3 be the matrix mapping with corresponding matrix

A = [1 2 −1; −3 −4 2; 5 2 3]

Is TA onto?

Solution. The rref(A) is

[1 2 −1; −3 −4 2; 5 2 3] ∼ [1 0 0; 0 1 0; 0 0 1]

Therefore, r = rank(A) = 3. The dimension of the co-domain is m = 3 and therefore TA is onto. Therefore, the columns of A span all of R3, that is, every b ∈ R3 can be written as a linear combination of the columns of A:

span{(1, −3, 5), (2, −4, 2), (−1, 2, 3)} = R3

Example 8.5. Let TA : R4 → R3 be the matrix mapping with corresponding matrix

A = [1 2 −1 4; −1 4 1 8; 2 0 −2 0]

Is TA onto?

Solution. The rref(A) is

A = [1 2 −1 4; −1 4 1 8; 2 0 −2 0] ∼ [1 0 −1 0; 0 1 0 2; 0 0 0 0]

Therefore, r = rank(A) = 2. The dimension of the co-domain is m = 3 and therefore TA is not onto. Notice that v3 = −v1 and v4 = 2v2. Thus, v3 and v4 are already in the span of the columns v1, v2. Therefore, span{v1, v2, v3, v4} = span{v1, v2} ≠ R3.
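The rank criterion of Theorem 8.3 can be spot-checked numerically. The following sketch (not part of the original notes) reproduces Examples 8.4 and 8.5 with NumPy, assuming NumPy is available:

```python
import numpy as np

# Example 8.4: rank(A) = 3 = m, so T_A maps onto R^3.
A = np.array([[1, 2, -1],
              [-3, -4, 2],
              [5, 2, 3]])
print(np.linalg.matrix_rank(A))  # 3

# Example 8.5: rank(A) = 2 < m = 3, so T_A is not onto.
B = np.array([[1, 2, -1, 4],
              [-1, 4, 1, 8],
              [2, 0, -2, 0]])
print(np.linalg.matrix_rank(B))  # 2
```

In the second case you can also confirm the column relations directly: column 3 equals the negative of column 1, and column 4 is twice column 2.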
Below is a theorem which places restrictions on the size of the domain of an onto mapping.

Theorem 8.6: Suppose that TA : Rn → Rm is a matrix mapping corresponding to A ∈ Mm×n. If TA is onto then m ≤ n.

Proof. If TA is onto then the rref(A) has r = m leading 1's. Therefore, A has at least m columns. The number of columns of A is n. Therefore, m ≤ n.

An equivalent way of stating Theorem 8.6 is the following.

Corollary 8.7: If TA : Rn → Rm is a matrix mapping corresponding to A ∈ Mm×n and n < m then TA cannot be onto.

Intuitively, if the domain Rn is "smaller" than the co-domain Rm and TA : Rn → Rm is linear then TA cannot be onto. For example, a matrix mapping TA : R → R2 cannot be onto. Linearity plays a key role in this. In fact, there exists a continuous (nonlinear) function f : R → R2 whose range is a square! In this case, the domain is 1-dimensional and the range is 2-dimensional. This situation cannot happen when the mapping is linear.

Example 8.8. Let TA : R2 → R3 be the matrix mapping with corresponding matrix

A = [1 4; −3 2; 2 1]

Is TA onto?

Solution. TA is not onto, because the domain is R2 and the co-domain is R3 (Corollary 8.7). Intuitively, two vectors are not enough to span R3. Geometrically, two vectors in R3 span a 2D plane going through the origin. The vectors not on the plane span{v1, v2} are not in the range of TA.

# 8.2 One-to-One Mappings

Given a linear mapping T : Rn → Rm, the question of whether b ∈ Rm is
in the range of T is an existence question. Indeed, if b ∈ Range(T) then there exists an x ∈ Rn such that T(x) = b. We now want to look at the problem of whether x is unique, that is, whether there exists a distinct y such that T(y) = b.

Definition 8.9: A vector mapping T : Rn → Rm is said to be one-to-one if for each b ∈ Range(T) there exists only one x ∈ Rn such that T(x) = b.

When T is a linear mapping, we have all the tools necessary to give a complete description of when T is one-to-one. To do this, we use the fact that if T : Rn → Rm is linear then T(0) = 0. Here is one proof: T(0) = T(x − x) = T(x) − T(x) = 0.

Theorem 8.10: Let T : Rn → Rm be linear. Then T is one-to-one if and only if T(x) = 0 implies that x = 0.

If TA : Rn → Rm is a matrix mapping then according to Theorem 8.10, TA is one-to-one if and only if the only solution to Ax = 0 is x = 0. We gather these facts in the following theorem.

Theorem 8.11: Let TA : Rn → Rm be a matrix mapping, where A = [v1 v2 · · · vn] ∈ Mm×n. The following statements are equivalent:
1. TA is one-to-one.
2. The rank of A is r = rank(A) = n.
3. The columns v1, v2, . . . , vn are linearly independent.

Example 8.12. Let TA : R4 → R3 be the matrix mapping with matrix

A = [3 −2 6 4; −1 0 −2 −1; 2 −2 0 2].

Is TA one-to-one?

Solution. By Theorem 8.11, TA is one-to-one if and only if the columns of A are linearly independent. The columns of A lie in R3 and there are n = 4 columns. From Lecture 6, we know then that the columns are not linearly independent. Therefore, TA is not one-to-one. Alternatively, A will have rank at most r = 3 (why?). Therefore, the solution set to Ax = 0 will have at least one free parameter, and thus there exist infinitely many solutions to Ax = 0. Intuitively, because R4 is "larger" than R3, the linear mapping TA will have to project R4 onto R3 and thus infinitely many vectors in R4 will be mapped to the same vector in R3.

Example 8.13. Let TA : R2 → R3 be the matrix mapping with matrix

A = [1 0; 3 −1; 2 0]

Is TA one-to-one?

Solution. By inspection, we see that the columns of A are linearly independent. Therefore, TA is one-to-one. Alternatively, one can compute that

rref(A) = [1 0; 0 1; 0 0]

Therefore, r = rank(A) = 2, which is equal to the number of columns of A.

# 8.3 Standard Matrix of a Linear Mapping

We have shown that all matrix mappings TA are linear mappings. We now want to answer the reverse question: are all linear mappings matrix mappings in disguise? If T : Rn → Rm is a linear mapping, then to show that T is in fact a matrix mapping we must show that there is some matrix A ∈ Mm×n such that T(x) = Ax. To that end, introduce the standard unit vectors e1, e2, . . . , en in Rn:

e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), e3 = (0, 0, 1, . . . , 0), . . . , en = (0, 0, 0, . . . , 1).

Every x ∈ Rn is in span{e1, e2, . . . , en} because:

x = (x1, x2, . . . , xn) = x1e1 + x2e2 + · · · + xnen

With this notation we prove the following.

Theorem 8.14: Every linear mapping is a matrix mapping.

Proof. Let T : Rn → Rm be a linear mapping. Let v1 = T(e1), v2 = T(e2), . . . , vn = T(en). The co-domain of T is Rm, and thus vi ∈ Rm. Now, for arbitrary x ∈ Rn we can write x = x1e1 + x2e2 + · · · + xnen. Then by linearity of T, we have

T(x) = T(x1e1 + x2e2 + · · · + xnen) = x1T(e1) + x2T(e2) + · · · + xnT(en) = x1v1 + x2v2 + · · · + xnvn = [v1 v2 · · · vn] x.

Define the matrix A ∈ Mm×n by A = [v1 v2 · · · vn]. Then our computation above shows that T(x) = x1v1 + x2v2 + · · · + xnvn = Ax. Therefore, T is a matrix mapping with the matrix A ∈ Mm×n.

If T : Rn → Rm is a linear mapping, the matrix

A = [T(e1) T(e2) · · · T(en)]

is called the standard matrix of T. In words, the columns of A are the images of the standard unit vectors e1, e2, . . . , en under T. The punchline is
that if T is a linear mapping, then to derive properties of T we need only know the standard matrix A corresponding to T.

Example 8.15. Let T : R2 → R2 be the linear mapping that rotates every vector by an angle θ. Use the standard unit vectors e1 = (1, 0) and e2 = (0, 1) in R2 to write down the matrix A ∈ R2×2 corresponding to T.

Solution. We have

A = [T(e1) T(e2)] = [cos(θ) −sin(θ); sin(θ) cos(θ)]

Example 8.16. Let T : R3 → R3 be a dilation of factor k = 2. Find the standard matrix A of T.

Solution. The mapping is T(x) = 2x. Then

T(e1) = 2 (1, 0, 0) = (2, 0, 0), T(e2) = 2 (0, 1, 0) = (0, 2, 0), T(e3) = 2 (0, 0, 1) = (0, 0, 2)

Therefore,

A = [T(e1) T(e2) T(e3)] = [2 0 0; 0 2 0; 0 0 2]

is the standard matrix of T.

After this lecture you should know the following:
• the relationship between the range of a matrix mapping T(x) = Ax and the span of the columns of A
• what it means for a mapping to be onto and one-to-one
• how to verify if a linear mapping is onto and one-to-one
• that all linear mappings are matrix mappings
• what the standard unit vectors are
• how to compute the standard matrix of a linear mapping

# Lecture 9 Matrix Algebra

# 9.1 Sums of Matrices

We begin with the definition of matrix addition.

Definition 9.1: Given matrices A = [aij] and B = [bij], both of the same dimension m × n, the sum A + B is the m × n matrix whose (i, j) entry is aij + bij.

Next is the definition of scalar-matrix multiplication.

Definition 9.2: For a scalar α we define αA as the matrix whose (i, j) entry is α aij.

Example 9.3. Given A and B below, find 3A − 2B.

A = [1 −2 5; 0 −3 9; 4 −6 7], B = [5 0 −11; 3 −5 1; −1 −9 0]

Solution. We compute:

3A − 2B = [3 −6 15; 0 −9 27; 12 −18 21] − [10 0 −22; 6 −10 2; −2 −18 0] = [−7 −6 37; −6 1 25; 14 0 21]

Below are some basic algebraic properties of matrix addition/scalar multiplication.

Theorem 9.4: Let A, B, C be matrices of the same size and let
α, β be scalars. Then
(a) A + B = B + A
(b) (A + B) + C = A + (B + C)
(c) A + 0 = A
(d) α(A + B) = αA + αB
(e) (α + β)A = αA + βA
(f) α(βA) = (αβ)A

# 9.2 Matrix Multiplication

Let TB : Rp → Rn and let TA : Rn → Rm be linear mappings. If x ∈ Rp then TB(x) ∈ Rn and thus we can apply TA to TB(x). The resulting vector TA(TB(x)) is in Rm. Hence, each x ∈ Rp can be mapped to a point in Rm, and because TB and TA are linear mappings the resulting mapping is also linear. This resulting mapping is called the composition of TA and TB, and is usually denoted by TA ◦ TB : Rp → Rm (see Figure 9.1). Hence,

(TA ◦ TB)(x) = TA(TB(x)).

Because TA ◦ TB : Rp → Rm is a linear mapping it has an associated standard matrix, which we denote for now by C. From Lecture 8, to compute the standard matrix of any linear mapping, we must compute the images of the standard unit vectors e1, e2, . . . , ep under the linear mapping. Now, for any x ∈ Rp,

TA(TB(x)) = TA(Bx) = A(Bx).

Applying this to x = ei for all i = 1, 2, . . . , p, we obtain the standard matrix of TA ◦ TB:

C = [A(Be1) A(Be2) · · · A(Bep)].

Figure 9.1: Illustration of the composition of two mappings.

Now Be1 is

Be1 = [b1 b2 · · · bp] e1 = b1,

and similarly Bei = bi for all i = 1, 2, . . . , p. Therefore,

C = [Ab1 Ab2 · · · Abp]

is the standard matrix of TA ◦ TB. This computation motivates the following definition.

Definition 9.5: For A ∈ Rm×n and B ∈ Rn×p, with B = [b1 b2 · · · bp], we define the product AB by the formula

AB = [Ab1 Ab2 · · · Abp].

The product AB is defined only when the number of columns of A equals the number of rows of B. The following diagram is useful for remembering this:

(m × n) · (n × p) → m × p

From our definition of AB, the standard matrix of the composite mapping TA ◦ TB is C = AB. In other words, composition of linear mappings corresponds to matrix multiplication.

Example 9.6. For A and B below compute AB and BA.

A = [1 2 −2; 1 1 −3], B = [−4 2 4 −4; −1 −5 −3 3; −4 −4 −3 −1]

Solution. Computing column-by-column, AB = [Ab1 Ab2 Ab3 Ab4]:

AB = [1 2 −2; 1 1 −3] [−4 2 4 −4; −1 −5 −3 3; −4 −4 −3 −1] = [2 0 4 4; 7 9 10 2]

On the other hand, BA is not defined! B has 4 columns and A has 2 rows.

Example 9.7. For A and B below compute AB
Example 9.7. For $A$ and $B$ below compute $AB$ and $BA$.
$$A = \begin{bmatrix} -4 & 4 & 3 \\ 3 & -3 & -1 \\ -2 & -1 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} -1 & -1 & 0 \\ -3 & 0 & -2 \\ -2 & 1 & -2 \end{bmatrix}$$

Solution. First, $AB = [A\mathbf{b}_1 \; A\mathbf{b}_2 \; A\mathbf{b}_3]$:
$$AB = \begin{bmatrix} -4 & 4 & 3 \\ 3 & -3 & -1 \\ -2 & -1 & 1 \end{bmatrix}\begin{bmatrix} -1 & -1 & 0 \\ -3 & 0 & -2 \\ -2 & 1 & -2 \end{bmatrix} = \begin{bmatrix} -14 & 7 & -14 \\ 8 & -4 & 8 \\ 3 & 3 & 0 \end{bmatrix}.$$
Next, $BA = [B\mathbf{a}_1 \; B\mathbf{a}_2 \; B\mathbf{a}_3]$:
$$BA = \begin{bmatrix} -1 & -1 & 0 \\ -3 & 0 & -2 \\ -2 & 1 & -2 \end{bmatrix}\begin{bmatrix} -4 & 4 & 3 \\ 3 & -3 & -1 \\ -2 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 & -2 \\ 16 & -10 & -11 \\ 15 & -9 & -9 \end{bmatrix}.$$
Comparing the two results, we see that in general $AB \neq BA$, i.e., matrix multiplication is not commutative.

An important matrix that arises frequently is the identity matrix $I_n \in \mathbb{R}^{n \times n}$ of size $n$:
$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$
You should verify that for any $A \in \mathbb{R}^{n \times n}$ it holds that $AI_n = I_nA = A$.

Below are some basic algebraic properties of matrix multiplication.

Theorem 9.8: Let $A$, $B$, $C$ be matrices of appropriate dimensions, and let $\alpha$ be a scalar. Then

(1) $A(BC) = (AB)C$
(2) $A(B + C) = AB + AC$
(3) $(B + C)A = BA + CA$
(4) $\alpha(AB) = (\alpha A)B = A(\alpha B)$
(5) $I_nA = AI_n = A$
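The non-commutativity seen in Example 9.7 is easy to confirm numerically. A minimal sketch (the row-by-column helper is my own):

```python
def mat_mul(A, B):
    # standard row-by-column product of two lists-of-rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# the matrices of Example 9.7
A = [[-4,  4,  3],
     [ 3, -3, -1],
     [-2, -1,  1]]
B = [[-1, -1,  0],
     [-3,  0, -2],
     [-2,  1, -2]]

AB = mat_mul(A, B)  # [[-14, 7, -14], [8, -4, 8], [3, 3, 0]]
BA = mat_mul(B, A)  # [[1, -1, -2], [16, -10, -11], [15, -9, -9]]
print(AB != BA)     # True: matrix multiplication is not commutative
```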
If $A \in \mathbb{R}^{n \times n}$ is a square matrix, the $k$th power of $A$ is
$$A^k = \underbrace{AAA \cdots A}_{k \text{ times}}$$

Example 9.9. Compute $A^3$ if $A = \begin{bmatrix} -2 & 3 \\ 1 & 0 \end{bmatrix}$.

Solution. First compute $A^2$:
$$A^2 = \begin{bmatrix} -2 & 3 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} -2 & 3 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 7 & -6 \\ -2 & 3 \end{bmatrix}$$
And then $A^3$:
$$A^3 = A^2A = \begin{bmatrix} 7 & -6 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} -2 & 3 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -20 & 21 \\ 7 & -6 \end{bmatrix}$$
We could also do
$$A^3 = AA^2 = \begin{bmatrix} -2 & 3 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 7 & -6 \\ -2 & 3 \end{bmatrix} = \begin{bmatrix} -20 & 21 \\ 7 & -6 \end{bmatrix}.$$

# 9.3 Matrix Transpose

We begin with the definition of the transpose of a matrix.

Definition 9.10: Given a matrix $A \in \mathbb{R}^{m \times n}$, the transpose of $A$ is the matrix $A^T$ whose $i$th column is the $i$th row of $A$. If $A$ is $m \times n$ then $A^T$ is $n \times m$.

For example, if
$$A = \begin{bmatrix} 0 & -1 & 8 & -7 & -4 \\ -4 & 6 & -10 & -9 & 6 \\ 9 & 5 & -2 & -3 & 5 \\ -8 & 8 & 4 & 7 & 7 \end{bmatrix} \quad\text{then}\quad A^T = \begin{bmatrix} 0 & -4 & 9 & -8 \\ -1 & 6 & 5 & 8 \\ 8 & -10 & -2 & 4 \\ -7 & -9 & -3 & 7 \\ -4 & 6 & 5 & 7 \end{bmatrix}.$$

Example 9.11. Compute $(AB)^T$ and $B^TA^T$ if
$$A = \begin{bmatrix} -2 & 1 & 0 \\ 3 & -1 & -3 \end{bmatrix}, \qquad B = \begin{bmatrix} -2 & 1 & 2 \\ -1 & -2 & 0 \\ 0 & 0 & -1 \end{bmatrix}.$$

Solution. Compute $AB$:
$$AB = \begin{bmatrix} -2 & 1 & 0 \\ 3 & -1 & -3 \end{bmatrix}\begin{bmatrix} -2 & 1 & 2 \\ -1 & -2 & 0 \\ 0 & 0 & -1 \end{bmatrix} = \begin{bmatrix} 3 & -4 & -4 \\ -5 & 5 & 9 \end{bmatrix}$$
Next compute $B^TA^T$:
$$B^TA^T = \begin{bmatrix} -2 & -1 & 0 \\ 1 & -2 & 0 \\ 2 & 0 & -1 \end{bmatrix}\begin{bmatrix} -2 & 3 \\ 1 & -1 \\ 0 & -3 \end{bmatrix} = \begin{bmatrix} 3 & -5 \\ -4 & 5 \\ -4 & 9 \end{bmatrix} = (AB)^T$$

The following theorem summarizes properties of the transpose.

Theorem 9.12: Let $A$ and $B$ be matrices of appropriate sizes. The following hold:

(1) $(A^T)^T = A$
(2) $(A + B)^T = A^T + B^T$
(3) $(\alpha A)^T = \alpha A^T$
(4) $(AB)^T = B^TA^T$

A consequence of property (4) is that
$$(A_1A_2 \cdots A_k)^T = A_k^T A_{k-1}^T \cdots A_2^T A_1^T,$$
and as a special case $(A^k)^T = (A^T)^k$.

Example 9.13. Let $T : \mathbb{R}^2 \to \mathbb{R}^2$ be the linear mapping that first contracts vectors by a factor of $k = \tfrac{1}{3}$ and then rotates by an angle $\theta$. What is the standard matrix $A$ of $T$?

Solution. Let $\mathbf{e}_1 = (1, 0)$ and $\mathbf{e}_2 = (0, 1)$ denote the standard unit vectors in $\mathbb{R}^2$. From Lecture 8, the standard matrix of $T$ is $A = [T(\mathbf{e}_1) \; T(\mathbf{e}_2)]$. Recall that the standard matrix of a rotation by $\theta$ is
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$$
Contracting $\mathbf{e}_1$ by a factor of $k = \tfrac{1}{3}$ results in $(\tfrac{1}{3}, 0)$, and then rotating by $\theta$ results in
$$\begin{bmatrix} \tfrac{1}{3}\cos\theta \\ \tfrac{1}{3}\sin\theta \end{bmatrix} = T(\mathbf{e}_1).$$
Contracting $\mathbf{e}_2$ by a factor of $k = \tfrac{1}{3}$ results in $(0, \tfrac{1}{3})$, and then rotating by $\theta$ results in
$$\begin{bmatrix} -\tfrac{1}{3}\sin\theta \\ \tfrac{1}{3}\cos\theta \end{bmatrix} = T(\mathbf{e}_2).$$
Therefore,
$$A = [T(\mathbf{e}_1) \; T(\mathbf{e}_2)] = \begin{bmatrix} \tfrac{1}{3}\cos\theta & -\tfrac{1}{3}\sin\theta \\ \tfrac{1}{3}\sin\theta & \tfrac{1}{3}\cos\theta \end{bmatrix}.$$
On the other hand, the standard matrix corresponding to a contraction by a factor $k = \tfrac{1}{3}$ is
$$\begin{bmatrix} \tfrac{1}{3} & 0 \\ 0 & \tfrac{1}{3} \end{bmatrix}.$$
Therefore,
$$\underbrace{\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}}_{\text{rotation}} \underbrace{\begin{bmatrix} \tfrac{1}{3} & 0 \\ 0 & \tfrac{1}{3} \end{bmatrix}}_{\text{contraction}} = \begin{bmatrix} \tfrac{1}{3}\cos\theta & -\tfrac{1}{3}\sin\theta \\ \tfrac{1}{3}\sin\theta & \tfrac{1}{3}\cos\theta \end{bmatrix} = A.$$

After this lecture you should know the following:
• how to add and multiply matrices
• that matrix multiplication corresponds to composition of linear mappings
• the algebraic properties of matrix multiplication (Theorem 9.8)
• how to compute the transpose of a matrix
• the properties of matrix transposition (Theorem 9.12)

# Lecture 10 Invertible Matrices

# 10.1 Inverse of a Matrix

The inverse of a square matrix $A \in \mathbb{R}^{n \times n}$ generalizes the notion of the reciprocal of a non-zero number $a \in \mathbb{R}$. Formally speaking, the inverse of a non-zero number $a \in \mathbb{R}$ is the unique number $c \in \mathbb{R}$ such that $ac = ca = 1$. The inverse of $a \neq 0$, usually denoted by $a^{-1} = \tfrac{1}{a}$, can be used to solve the equation $ax = b$:
$$ax = b \;\Rightarrow\; a^{-1}ax = a^{-1}b \;\Rightarrow\; x = a^{-1}b.$$
This motivates the following definition.

Definition 10.1: A matrix $A \in \mathbb{R}^{n \times n}$ is called invertible if there exists a matrix $C \in \mathbb{R}^{n \times n}$ such that $AC = I_n$ and $CA = I_n$.

If $A$ is invertible, can it have more than one inverse? Suppose that there exist $C_1$, $C_2$ such that $AC_i = C_iA = I_n$. Then
$$C_2 = C_2(AC_1) = (C_2A)C_1 = I_nC_1 = C_1.$$
Thus,
if $A$ is invertible, it can have only one inverse. This motivates the following definition.

Definition 10.2: If $A$ is invertible then we denote the inverse of $A$ by $A^{-1}$. Thus,
$$AA^{-1} = A^{-1}A = I_n.$$

Example 10.3. Given $A$ and $C$ below, show that $C$ is the inverse of $A$.
$$A = \begin{bmatrix} 1 & -3 & 0 \\ -1 & 2 & -2 \\ -2 & 6 & 1 \end{bmatrix}, \qquad C = \begin{bmatrix} -14 & -3 & -6 \\ -5 & -1 & -2 \\ 2 & 0 & 1 \end{bmatrix}$$

Solution. Compute $AC$:
$$AC = \begin{bmatrix} 1 & -3 & 0 \\ -1 & 2 & -2 \\ -2 & 6 & 1 \end{bmatrix}\begin{bmatrix} -14 & -3 & -6 \\ -5 & -1 & -2 \\ 2 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Compute $CA$:
$$CA = \begin{bmatrix} -14 & -3 & -6 \\ -5 & -1 & -2 \\ 2 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -3 & 0 \\ -1 & 2 & -2 \\ -2 & 6 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Therefore, by definition, $C = A^{-1}$.

Theorem 10.4: Let $A \in \mathbb{R}^{n \times n}$ and suppose that $A$ is invertible. Then for any $\mathbf{b} \in \mathbb{R}^n$ the matrix equation $A\mathbf{x} = \mathbf{b}$ has a unique solution given by $A^{-1}\mathbf{b}$.

Proof: Let $\mathbf{b} \in \mathbb{R}^n$ be arbitrary. Multiplying the equation $A\mathbf{x} = \mathbf{b}$ by $A^{-1}$ from the left, we obtain
$$A^{-1}A\mathbf{x} = A^{-1}\mathbf{b} \;\Rightarrow\; I_n\mathbf{x} = A^{-1}\mathbf{b} \;\Rightarrow\; \mathbf{x} = A^{-1}\mathbf{b}.$$
Therefore, with $\mathbf{x} = A^{-1}\mathbf{b}$ we have that $A\mathbf{x} = A(A^{-1}\mathbf{b}) = AA^{-1}\mathbf{b} = I_n\mathbf{b} = \mathbf{b}$, and thus $\mathbf{x} = A^{-1}\mathbf{b}$ is a solution. If $\tilde{\mathbf{x}}$ is another solution of the equation, that is, $A\tilde{\mathbf{x}} = \mathbf{b}$, then multiplying both sides by $A^{-1}$ we obtain that $\tilde{\mathbf{x}} = A^{-1}\mathbf{b}$. Thus, $\mathbf{x} = \tilde{\mathbf{x}}$.
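Definition 10.1 and Theorem 10.4 can be checked numerically with the matrices of Example 10.3. The right-hand side $\mathbf{b}$ below is my own choice for illustration, and the helper function is not part of the notes:

```python
def mat_mul(A, B):
    # row-by-column product of two lists-of-rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[ 1, -3,  0],
     [-1,  2, -2],
     [-2,  6,  1]]
C = [[-14, -3, -6],
     [ -5, -1, -2],
     [  2,  0,  1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# both products give the identity, so C = A^{-1} (Definition 10.1)
assert mat_mul(A, C) == I3 and mat_mul(C, A) == I3

# Theorem 10.4: for any b, the unique solution of Ax = b is x = A^{-1} b
b = [1, 2, 3]  # an arbitrary right-hand side, chosen for illustration
x = [sum(c * bi for c, bi in zip(row, b)) for row in C]
print(x)       # [-38, -13, 5]
assert [sum(a * xi for a, xi in zip(row, x)) for row in A] == b  # check Ax = b
```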
Example 10.5. Use the result of Example 10.3 to solve the linear system $A\mathbf{x} = \mathbf{b}$ if
$$A = \begin{bmatrix} 1 & -3 & 0 \\ -1 & 2 & -2 \\ -2 & 6 & 1 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 1 \\ -3 \\ -1 \end{bmatrix}.$$

Solution. We showed in Example 10.3 that
$$A^{-1} = \begin{bmatrix} -14 & -3 & -6 \\ -5 & -1 & -2 \\ 2 & 0 & 1 \end{bmatrix}.$$
Therefore, the unique solution to the linear system $A\mathbf{x} = \mathbf{b}$ is
$$A^{-1}\mathbf{b} = \begin{bmatrix} -14 & -3 & -6 \\ -5 & -1 & -2 \\ 2 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ -3 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}.$$
Verify:
$$\begin{bmatrix} 1 & -3 & 0 \\ -1 & 2 & -2 \\ -2 & 6 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ -3 \\ -1 \end{bmatrix}.$$

The following theorem summarizes the relationship between the matrix inverse, matrix multiplication, and the matrix transpose.

Theorem 10.6: Let $A$ and $B$ be invertible matrices. Then:

(1) The matrix $A^{-1}$ is invertible and its inverse is $A$: $(A^{-1})^{-1} = A$.
(2) The matrix $AB$ is invertible and its inverse is $B^{-1}A^{-1}$: $(AB)^{-1} = B^{-1}A^{-1}$.
(3) The matrix $A^T$ is invertible and its inverse is $(A^{-1})^T$: $(A^T)^{-1} = (A^{-1})^T$.

Proof: To prove (2) we compute
$$(AB)(B^{-1}A^{-1}) = ABB^{-1}A^{-1} = AI_nA^{-1} = AA^{-1} = I_n.$$
To prove (3) we compute
$$A^T(A^{-1})^T = (A^{-1}A)^T = I_n^T = I_n.$$

# 10.2 Computing the Inverse of a Matrix

If $A \in \mathbb{R}^{n \times n}$ is invertible, how do we find $A^{-1}$? Let $A^{-1} = [\mathbf{c}_1 \; \mathbf{c}_2 \; \cdots \; \mathbf{c}_n]$; we will find expressions for the $\mathbf{c}_i$. First note that $AA^{-1} = [A\mathbf{c}_1 \; A\mathbf{c}_2 \; \cdots \; A\mathbf{c}_n]$. On the other hand, we also have $AA^{-1} = I_n = [\mathbf{e}_1 \; \mathbf{e}_2 \; \cdots \; \mathbf{e}_n]$. Therefore, we want to find $\mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_n$ such that
$$\underbrace{[A\mathbf{c}_1 \;\; A\mathbf{c}_2 \;\; \cdots \;\; A\mathbf{c}_n]}_{AA^{-1}} = \underbrace{[\mathbf{e}_1 \;\; \mathbf{e}_2 \;\; \cdots \;\; \mathbf{e}_n]}_{I_n}.$$
To find $\mathbf{c}_i$ we therefore need to solve the linear system $A\mathbf{x} = \mathbf{e}_i$. Here the image vector "$\mathbf{b}$" is $\mathbf{e}_i$. To find $\mathbf{c}_1$ we form the augmented matrix $[A \;\; \mathbf{e}_1]$ and find its RREF:
$$[A \;\; \mathbf{e}_1] \sim [I_n \;\; \mathbf{c}_1].$$
We will need to do this for each of $\mathbf{c}_2, \ldots, \mathbf{c}_n$, so we might as well form the combined augmented matrix $[A \;\; \mathbf{e}_1 \; \mathbf{e}_2 \; \cdots \; \mathbf{e}_n]$ and find the RREF all at once:
$$[A \;\; \mathbf{e}_1 \; \mathbf{e}_2 \; \cdots \; \mathbf{e}_n] \sim [I_n \;\; \mathbf{c}_1 \; \mathbf{c}_2 \; \cdots \; \mathbf{c}_n].$$
In summary, to determine whether $A^{-1}$ exists and to simultaneously compute it, we compute the RREF of the augmented matrix $[A \;\; I_n]$, that is, $A$ augmented with the $n \times n$ identity matrix. If the RREF of $A$ is $I_n$, that is,
$$[A \;\; I_n] \sim [I_n \;\; \mathbf{c}_1 \; \mathbf{c}_2 \; \cdots \; \mathbf{c}_n],$$
then $A^{-1} = [\mathbf{c}_1 \; \mathbf{c}_2 \; \cdots \; \mathbf{c}_n]$. If the RREF of $A$ is not $I_n$, then $A$ is not invertible.
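The $[A \mid I_n] \sim [I_n \mid A^{-1}]$ procedure is a small Gauss–Jordan elimination. Below is a floating-point sketch; the partial pivoting step is an addition of mine for numerical safety and is not part of the hand procedure described above. It is checked against Examples 10.8 and 10.9 further on.

```python
def inverse(A):
    """Invert A by row-reducing [A | I] to [I | A^-1]; returns None if singular."""
    n = len(A)
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for i in range(n):
        # choose the largest pivot in column i (partial pivoting, my addition)
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return None             # rref(A) is not I_n, so A is not invertible
        M[i], M[p] = M[p], M[i]     # Type 1: interchange rows
        piv = M[i][i]
        M[i] = [x / piv for x in M[i]]  # Type 2: scale the pivot row to get a 1
        for r in range(n):
            if r != i and M[r][i] != 0.0:
                f = M[r][i]             # Type 3: clear the rest of column i
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [row[n:] for row in M]       # right half is A^-1

A = [[1, 0, 3], [1, 1, 0], [-2, 0, -7]]
print(inverse(A))  # [[7, 0, 3], [-7, 1, -3], [-2, 0, -1]] (as floats)
print(inverse([[1, 0, 1], [1, 1, -2], [-2, 0, -2]]))  # None: rank 2, not invertible
```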
Example 10.7. Find the inverse of $A = \begin{bmatrix} 1 & 3 \\ -1 & -2 \end{bmatrix}$ if it exists.

Solution. Form the augmented matrix $[A \;\; I_2]$ and row reduce:
$$[A \;\; I_2] = \left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ -1 & -2 & 0 & 1 \end{array}\right]$$
Add rows $R_1$ and $R_2$:
$$\left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ -1 & -2 & 0 & 1 \end{array}\right] \xrightarrow{R_1 + R_2} \left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ 0 & 1 & 1 & 1 \end{array}\right]$$
Perform the operation $-3R_2 + R_1$:
$$\left[\begin{array}{cc|cc} 1 & 3 & 1 & 0 \\ 0 & 1 & 1 & 1 \end{array}\right] \xrightarrow{-3R_2 + R_1} \left[\begin{array}{cc|cc} 1 & 0 & -2 & -3 \\ 0 & 1 & 1 & 1 \end{array}\right]$$
Thus, $\operatorname{rref}(A) = I_2$, and therefore $A$ is invertible. The inverse is
$$A^{-1} = \begin{bmatrix} -2 & -3 \\ 1 & 1 \end{bmatrix}$$
Verify:
$$AA^{-1} = \begin{bmatrix} 1 & 3 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} -2 & -3 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

Example 10.8. Find the inverse of $A = \begin{bmatrix} 1 & 0 & 3 \\ 1 & 1 & 0 \\ -2 & 0 & -7 \end{bmatrix}$ if it exists.

Solution. Form the augmented matrix $[A \;\; I_3]$ and row reduce:
$$\left[\begin{array}{ccc|ccc} 1 & 0 & 3 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 \\ -2 & 0 & -7 & 0 & 0 & 1 \end{array}\right] \xrightarrow{-R_1 + R_2,\; 2R_1 + R_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -1 & 1 & 0 \\ 0 & 0 & -1 & 2 & 0 & 1 \end{array}\right]$$
Apply $-R_3$:
$$\xrightarrow{-R_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 3 & 1 & 0 & 0 \\ 0 & 1 & -3 & -1 & 1 & 0 \\ 0 & 0 & 1 & -2 & 0 & -1 \end{array}\right]$$
Then $3R_3 + R_2$ and $-3R_3 + R_1$:
$$\xrightarrow{3R_3 + R_2,\; -3R_3 + R_1} \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 7 & 0 & 3 \\ 0 & 1 & 0 & -7 & 1 & -3 \\ 0 & 0 & 1 & -2 & 0 & -1 \end{array}\right]$$
Therefore, $\operatorname{rref}(A) = I_3$, and therefore $A$ is invertible. The inverse is
$$A^{-1} = \begin{bmatrix} 7 & 0 & 3 \\ -7 & 1 & -3 \\ -2 & 0 & -1 \end{bmatrix}$$
Verify:
$$AA^{-1} = \begin{bmatrix} 1 & 0 & 3 \\ 1 & 1 & 0 \\ -2 & 0 & -7 \end{bmatrix}\begin{bmatrix} 7 & 0 & 3 \\ -7 & 1 & -3 \\ -2 & 0 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Example 10.9. Find the inverse of $A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & -2 \\ -2 & 0 & -2 \end{bmatrix}$ if it exists.

Solution. Form the augmented matrix $[A \;\; I_3]$ and row reduce:
$$\left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 1 & 1 & -2 & 0 & 1 & 0 \\ -2 & 0 & -2 & 0 & 0 & 1 \end{array}\right] \xrightarrow{-R_1 + R_2,\; 2R_1 + R_3} \left[\begin{array}{ccc|ccc} 1 & 0 & 1 & 1 & 0 & 0 \\ 0 & 1 & -3 & -1 & 1 & 0 \\ 0 & 0 & 0 & 2 & 0 & 1 \end{array}\right]$$
We need not go further, since $\operatorname{rref}(A)$ is not $I_3$ (here $\operatorname{rank}(A) = 2$). Therefore, $A$ is not invertible.

# 10.3 Invertible Linear Mappings

Let $T_A : \mathbb{R}^n \to \mathbb{R}^n$ be a matrix mapping with standard matrix $A$, and suppose that $A$ is invertible. Let $T_{A^{-1}} : \mathbb{R}^n \to \mathbb{R}^n$ be the matrix mapping with standard matrix $A^{-1}$. Then the standard matrix of the composite mapping $T_{A^{-1}} \circ T_A : \mathbb{R}^n \to \mathbb{R}^n$ is $A^{-1}A = I_n$. Therefore, $(T_{A^{-1}} \circ T_A)(\mathbf{x}) = I_n\mathbf{x} = \mathbf{x}$. Let's unravel $(T_{A^{-1}} \circ T_A)(\mathbf{x})$ to see this:
$$(T_{A^{-1}} \circ T_A)(\mathbf{x}) = T_{A^{-1}}(T_A(\mathbf{x})) = T_{A^{-1}}(A\mathbf{x}) = A^{-1}A\mathbf{x} = \mathbf{x}.$$
Similarly, the standard matrix of $T_A \circ T_{A^{-1}}$ is also $I_n$. Intuitively, the linear mapping $T_{A^{-1}}$ undoes what $T_A$ does, and conversely. Moreover, since $A\mathbf{x} = \mathbf{b}$ always has a solution, $T_A$ is onto. And, because the solution to $A\mathbf{x} = \mathbf{b}$ is unique, $T_A$ is one-to-one. The following theorem summarizes equivalent conditions for matrix invertibility.

Theorem 10.10: Let $A \in \mathbb{R}^{n \times n}$. The following statements are equivalent:

(a) $A$ is invertible.
(b) $A$ is row equivalent to $I_n$, that is, $\operatorname{rref}(A) = I_n$.
(c) The equation $A\mathbf{x} = \mathbf{0}$ has only the trivial solution.
(d) The linear transformation $T_A(\mathbf{x}) = A\mathbf{x}$ is one-to-one.
(e) The linear transformation $T_A(\mathbf{x}) = A\mathbf{x}$ is onto.
(f) The matrix equation $A\mathbf{x} = \mathbf{b}$ is always solvable.
(g) The columns of $A$ span $\mathbb{R}^n$.
(h) The columns of $A$ are linearly independent.
(i) $A^T$ is invertible.

Proof: This is a summary of all the statements we have proved about
matrices and matrix mappings, specialized to the case of square matrices $A \in \mathbb{R}^{n \times n}$. Note that for non-square matrices, one-to-one does not imply onto, and conversely.

Example 10.11. Without doing any arithmetic, write down the inverse of the dilation matrix
$$A = \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix}.$$

Example 10.12. Without doing any arithmetic, write down the inverse of the rotation matrix
$$A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$$

After this lecture you should know the following:
• how to compute the inverse of a matrix
• properties of matrix inversion and matrix multiplication
• how to relate invertibility of a matrix to properties of the associated linear mapping (one-to-one, onto)
• the characterizations of invertible matrices (Theorem 10.10)

# Lecture 11 Determinants

# 11.1 Determinants of 2 × 2 and 3 × 3 Matrices

Consider a general $2 \times 2$ linear system
$$\begin{aligned} a_{11}x_1 + a_{12}x_2 &= b_1 \\ a_{21}x_1 + a_{22}x_2 &= b_2. \end{aligned}$$
Using elementary row operations, it can be shown that the solution is
$$x_1 = \frac{b_1a_{22} - b_2a_{12}}{a_{11}a_{22} - a_{12}a_{21}}, \qquad x_2 = \frac{b_2a_{11} - b_1a_{21}}{a_{11}a_{22} - a_{12}a_{21}},$$
provided that $a_{11}a_{22} - a_{12}a_{21} \neq 0$. Notice the denominator is the same in both expressions. The number $a_{11}a_{22} - a_{12}a_{21}$ therefore completely characterizes when a $2 \times 2$ linear system has a unique solution. This motivates the following definition.

Definition 11.1: Given a $2 \times 2$ matrix $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, we define the determinant of $A$ as
$$\det A = \det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = a_{11}a_{22} - a_{12}a_{21}.$$
An alternative notation for $\det A$ uses vertical bars:
$$\det \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}.$$

Example 11.2. Compute the determinant of $A$.
(i) $A = \begin{bmatrix} 3 & -1 \\ 8 & 2 \end{bmatrix}$  (ii) $A = \begin{bmatrix} 3 & 1 \\ -6 & -2 \end{bmatrix}$  (iii) $A = \begin{bmatrix} -110 & 0 \\ 568 & 0 \end{bmatrix}$

Solution. For (i):
$$\det(A) = \begin{vmatrix} 3 & -1 \\ 8 & 2 \end{vmatrix} = (3)(2) - (8)(-1) = 14$$
For (ii):
$$\det(A) = \begin{vmatrix} 3 & 1 \\ -6 & -2 \end{vmatrix} = (3)(-2) - (-6)(1) = 0$$
For (iii):
$$\det(A) = \begin{vmatrix} -110 & 0 \\ 568 & 0 \end{vmatrix} = (-110)(0) - (568)(0) = 0$$

As in the $2 \times 2$ case, the solution of a $3 \times 3$ linear system $A\mathbf{x} = \mathbf{b}$ can be shown to be
$$x_1 = \frac{\text{Numerator}_1}{D}, \qquad x_2 = \frac{\text{Numerator}_2}{D}, \qquad x_3 = \frac{\text{Numerator}_3}{D},$$
where
$$D = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}).$$
Notice that the terms of $D$ in parentheses are determinants of $2 \times 2$ submatrices of $A$:
$$D = a_{11}\underbrace{(a_{22}a_{33} - a_{23}a_{32})}_{\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}} - a_{12}\underbrace{(a_{21}a_{33} - a_{23}a_{31})}_{\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}} + a_{13}\underbrace{(a_{21}a_{32} - a_{22}a_{31})}_{\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}}.$$
Let
$$A_{11} = \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix}, \qquad A_{12} = \begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix}, \qquad A_{13} = \begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}.$$
Then we can write
$$D = a_{11}\det(A_{11}) - a_{12}\det(A_{12}) + a_{13}\det(A_{13}).$$
The matrix $A_{11}$ is obtained from $A$ by deleting the 1st row and the 1st column:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \longrightarrow A_{11} = \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix}.$$
Similarly, the matrix $A_{12}$ is obtained from $A$ by deleting the 1st row and the 2nd column:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \longrightarrow A_{12} = \begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix}.$$
Finally, the matrix $A_{13}$ is obtained from $A$ by deleting the 1st row and the 3rd column:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \longrightarrow A_{13} = \begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}.$$
Notice also that the signs in front of the coefficients $a_{11}$, $a_{12}$, and $a_{13}$ alternate. This motivates the following definition.

Definition 11.3: Let $A$ be a $3 \times 3$ matrix. Let $A_{jk}$ be the $2 \times 2$ matrix obtained from $A$ by deleting the $j$th row and $k$th column. Define the cofactor of $a_{jk}$ to be the number
$$C_{jk} = (-1)^{j+k}\det A_{jk}.$$
Define the determinant of $A$ to be
$$\det A = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}.$$

This definition of the determinant is called the expansion of the determinant along the first row. In the cofactor $C_{jk} = (-1)^{j+k}\det A_{jk}$, the expression $(-1)^{j+k}$ evaluates to either $1$ or $-1$, depending on whether $j + k$ is even or odd. For example, the cofactor of $a_{12}$ is
$$C_{12} = (-1)^{1+2}\det A_{12} = -\det A_{12},$$
and the cofactor of $a_{13}$ is
$$C_{13} = (-1)^{1+3}\det A_{13} = \det A_{13}.$$
We can also compute the cofactors of the other entries of $A$ in the obvious way. For example, the cofactor of $a_{23}$
is
$$C_{23} = (-1)^{2+3}\det A_{23} = -\det A_{23}.$$
A helpful way to remember the sign $(-1)^{j+k}$ of a cofactor is to use the matrix
$$\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}.$$
This works not just for $3 \times 3$ matrices but for any square $n \times n$ matrix.

Example 11.4. Compute the determinant of the matrix
$$A = \begin{bmatrix} 4 & -2 & 3 \\ 2 & 3 & 5 \\ 1 & 0 & 6 \end{bmatrix}$$

Solution. From the definition of the determinant,
$$\begin{aligned} \det A &= a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} \\ &= (4)\det A_{11} - (-2)\det A_{12} + (3)\det A_{13} \\ &= 4\begin{vmatrix} 3 & 5 \\ 0 & 6 \end{vmatrix} + 2\begin{vmatrix} 2 & 5 \\ 1 & 6 \end{vmatrix} + 3\begin{vmatrix} 2 & 3 \\ 1 & 0 \end{vmatrix} \\ &= 4(3 \cdot 6 - 5 \cdot 0) + 2(2 \cdot 6 - 1 \cdot 5) + 3(2 \cdot 0 - 1 \cdot 3) \\ &= 72 + 14 - 9 \\ &= 77 \end{aligned}$$

We can compute the determinant of a matrix $A$ by expanding along any row or column. For example, the expansion of the determinant of the matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
along the 3rd row is
$$\det A = a_{31}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} - a_{32}\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} + a_{33}\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix},$$
and along the 2nd column:
$$\det A = -a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{22}\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} - a_{32}\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix}.$$
The punchline is that any way you choose to expand (row or column), you will get the same answer.
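The first-row expansion is naturally recursive: each cofactor is itself a determinant of a smaller matrix. A short sketch (the function is my own, written straight from the definition, and checked against Example 11.4):

```python
def det(A):
    # cofactor (Laplace) expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # delete row 1 and column k+1 to form the submatrix A_{1,k+1}
        minor = [row[:k] + row[k+1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

print(det([[4, -2, 3],
           [2,  3, 5],
           [1,  0, 6]]))  # 77, as in Example 11.4
```

The same function works for any $n \times n$ matrix, since the recursion bottoms out at the $1 \times 1$ case.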
If a particular row or column contains zeros, say entry $a_{jk}$, then the computation of the determinant is simplified if you expand along either row $j$ or column $k$, because then $a_{jk}C_{jk} = 0$ and we need not compute $C_{jk}$.

Example 11.5. Compute the determinant of the matrix
$$A = \begin{bmatrix} 4 & -2 & 3 \\ 2 & 3 & 5 \\ 1 & 0 & 6 \end{bmatrix}$$

Solution. In Example 11.4 we computed $\det(A) = 77$ by expanding along the 1st row. Notice that $a_{32} = 0$. Expanding along the 3rd row:
$$\begin{aligned} \det A &= (1)\det A_{31} - (0)\det A_{32} + (6)\det A_{33} \\ &= \begin{vmatrix} -2 & 3 \\ 3 & 5 \end{vmatrix} + 6\begin{vmatrix} 4 & -2 \\ 2 & 3 \end{vmatrix} \\ &= 1(-2 \cdot 5 - 3 \cdot 3) + 6(4 \cdot 3 - (-2) \cdot 2) \\ &= -19 + 96 = 77 \end{aligned}$$

# 11.2 Determinants of n × n Matrices

Using the $3 \times 3$ case as a guide, we define the determinant of a general $n \times n$ matrix as follows.

Definition 11.6: Let $A$ be an $n \times n$ matrix. Let $A_{jk}$ be the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting the $j$th row and $k$th column, and let
$$C_{jk} = (-1)^{j+k}\det A_{jk}$$
be the $(j, k)$-cofactor of $A$. The determinant of $A$ is defined to be
$$\det A = a_{11}C_{11} + a_{12}C_{12} + \cdots + a_{1n}C_{1n}.$$

The next theorem tells us that we can compute the determinant by expanding along any row or column.

Theorem 11.7: Let $A$ be an $n \times n$ matrix. Then $\det A$ may be obtained by a cofactor expansion along any row or any column of $A$:
$$\det A = a_{j1}C_{j1} + a_{j2}C_{j2} + \cdots + a_{jn}C_{jn}.$$

We obtain two immediate corollaries.

Corollary 11.8: If $A$ has a row or column containing all zeros, then $\det A = 0$.
Proof. If the $j$th row contains all zeros, then $a_{j1} = a_{j2} = \cdots = a_{jn} = 0$, so
$$\det A = a_{j1}C_{j1} + a_{j2}C_{j2} + \cdots + a_{jn}C_{jn} = 0.$$

Corollary 11.9: For any square matrix $A$ it holds that $\det A = \det A^T$.

Sketch of the proof. Expanding along the $j$th row of $A$ is equivalent to expanding along the $j$th column of $A^T$.

Example 11.10. Compute the determinant of
$$A = \begin{bmatrix} 1 & 3 & 0 & -2 \\ 1 & 2 & -2 & -1 \\ 0 & 0 & 2 & 1 \\ -1 & -3 & 1 & 0 \end{bmatrix}$$

Solution. The third row contains two zeros, so expand along this row:
$$\begin{aligned} \det A &= 0\det A_{31} - 0\det A_{32} + 2\det A_{33} - \det A_{34} \\ &= 2\begin{vmatrix} 1 & 3 & -2 \\ 1 & 2 & -1 \\ -1 & -3 & 0 \end{vmatrix} - \begin{vmatrix} 1 & 3 & 0 \\ 1 & 2 & -2 \\ -1 & -3 & 1 \end{vmatrix} \\ &= 2\left( 1\begin{vmatrix} 2 & -1 \\ -3 & 0 \end{vmatrix} - 3\begin{vmatrix} 1 & -1 \\ -1 & 0 \end{vmatrix} - 2\begin{vmatrix} 1 & 2 \\ -1 & -3 \end{vmatrix} \right) - \left( 1\begin{vmatrix} 2 & -2 \\ -3 & 1 \end{vmatrix} - 3\begin{vmatrix} 1 & -2 \\ -1 & 1 \end{vmatrix} \right) \\ &= 2\big((0 - 3) - 3(0 - 1) - 2(-3 + 2)\big) - \big((2 - 6) - 3(1 - 2)\big) \\ &= 5 \end{aligned}$$

Example 11.11. Compute the determinant of
$$A = \begin{bmatrix} 1 & 3 & 0 & -2 \\ 1 & 2 & -2 & -1 \\ 0 & 0 & 2 & 1 \\ -1 & -3 & 1 & 0 \end{bmatrix}$$

Solution. Expanding along the second row:
$$\begin{aligned} \det A &= -\det A_{21} + 2\det A_{22} - (-2)\det A_{23} - 1\det A_{24} \\ &= -\begin{vmatrix} 3 & 0 & -2 \\ 0 & 2 & 1 \\ -3 & 1 & 0 \end{vmatrix} + 2\begin{vmatrix} 1 & 0 & -2 \\ 0 & 2 & 1 \\ -1 & 1 & 0 \end{vmatrix} + 2\begin{vmatrix} 1 & 3 & -2 \\ 0 & 0 & 1 \\ -1 & -3 & 0 \end{vmatrix} - \begin{vmatrix} 1 & 3 & 0 \\ 0 & 0 & 2 \\ -1 & -3 & 1 \end{vmatrix} \\ &= -1(-3 - 12) + 2(-1 - 4) + 2(0) - (0) \\ &= 5 \end{aligned}$$

# 11.3 Triangular Matrices

Below we introduce a class of matrices for which the determinant computation is trivial.

Definition 11.12: A square matrix $A \in \mathbb{R}^{n \times n}$ is called upper triangular if $a_{jk} = 0$ whenever $j > k$. In other words, all the entries of $A$ below the diagonal entries $a_{ii}$ are zero. It is called lower triangular if $a_{jk} = 0$ whenever $j < k$, that is, all the entries of $A$ above the diagonal entries are zero.

For a triangular matrix (upper or lower), repeated cofactor expansion along the first column (or first row) shows that the determinant is simply the product of the diagonal entries:
$$\det A = a_{11}a_{22}\cdots a_{nn}.$$
This fact is used again in Lecture 12.
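A quick numerical sanity check of the triangular case, reusing a recursive first-row expansion (the helper is mine, not part of the notes; the matrix is chosen for illustration):

```python
def det(A):
    # first-row cofactor expansion (Definition 11.6)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det([r[:k] + r[k+1:] for r in A[1:]])
               for k in range(len(A)))

# an upper triangular matrix: everything below the diagonal is zero
U = [[2, 7, -1],
     [0, 3,  4],
     [0, 0, -5]]
prod_diag = U[0][0] * U[1][1] * U[2][2]
print(det(U), prod_diag)  # both are -30
```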
# Lecture 12 Properties of the Determinant

# 12.1 ERO and Determinants

Recall that for a matrix $A \in \mathbb{R}^{n \times n}$ we defined
$$\det A = a_{j1}C_{j1} + a_{j2}C_{j2} + \cdots + a_{jn}C_{jn},$$
where the number $C_{jk} = (-1)^{j+k}\det A_{jk}$ is called the $(j, k)$-cofactor of $A$ and $\mathbf{a}_j = [a_{j1} \; a_{j2} \; \cdots \; a_{jn}]$ denotes the $j$th row of $A$. Notice that
$$\det A = [a_{j1} \; a_{j2} \; \cdots \; a_{jn}]\begin{bmatrix} C_{j1} \\ C_{j2} \\ \vdots \\ C_{jn} \end{bmatrix}.$$
If we let $\mathbf{c}_j = [C_{j1} \; C_{j2} \; \cdots \; C_{jn}]$, then $\det A = \mathbf{a}_j \cdot \mathbf{c}_j^T$. In this lecture, we will establish properties of the determinant under elementary row operations, and some consequences. The following theorem describes how the determinant behaves under elementary row operations of Type 1.

Theorem 12.1: Suppose that $A \in \mathbb{R}^{n \times n}$ and let $B$ be the matrix obtained by interchanging two rows of $A$. Then $\det B = -\det A$.

Proof. Consider the $2 \times 2$ case. Let $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and let $B = \begin{bmatrix} a_{21} & a_{22} \\ a_{11} & a_{12} \end{bmatrix}$. Then
$$\det B = a_{12}a_{21} - a_{11}a_{22} = -(a_{11}a_{22} - a_{12}a_{21}) = -\det A.$$
The general case is proved by induction.

This theorem leads to the following corollary.

Corollary 12.2: If $A \in \mathbb{R}^{n \times n}$ has two rows (or two columns) that are equal, then $\det(A) = 0$.

Proof. Suppose that $A$ has rows $j$ and $k$ that are equal. Let $B$ be the matrix obtained by interchanging rows $j$ and $k$. Then by the previous theorem $\det B = -\det A$. But clearly $B = A$, and therefore $\det B = \det A$. Therefore, $\det(A) = -\det(A)$, and thus $\det A = 0$.

Now we consider how the determinant behaves under elementary row operations of Type 2.

Theorem 12.3: Let $A \in \mathbb{R}^{n \times n}$ and let $B$ be the matrix obtained by multiplying a row of $A$ by $\beta$.
Then $\det B = \beta\det A$.

Proof. Suppose that $B$ is obtained from $A$ by multiplying the $j$th row by $\beta$. The rows of $A$ and $B$ different from $j$ are equal, and therefore $B_{jk} = A_{jk}$ for $k = 1, 2, \ldots, n$. In particular, the $(j, k)$-cofactors of $A$ and $B$ are equal. The $j$th row of $B$ is $\beta\mathbf{a}_j$. Then, expanding $\det B$ along the $j$th row:
$$\det B = (\beta\mathbf{a}_j) \cdot \mathbf{c}_j^T = \beta(\mathbf{a}_j \cdot \mathbf{c}_j^T) = \beta\det A.$$

Lastly, we consider Type 3 elementary row operations.

Theorem 12.4: Let $A \in \mathbb{R}^{n \times n}$ and let $B$ be the matrix obtained from $A$ by adding $\beta$ times the $k$th row to the $j$th row. Then $\det B = \det A$.

Proof. For any matrix $A$ and any row vector $\mathbf{r} = [r_1 \; r_2 \; \cdots \; r_n]$, the expression
$$\mathbf{r} \cdot \mathbf{c}_j^T = r_1C_{j1} + r_2C_{j2} + \cdots + r_nC_{jn}$$
is the determinant of the matrix obtained from $A$ by replacing the $j$th row with the row $\mathbf{r}$. Therefore, if $k \neq j$ then
$$\mathbf{a}_k \cdot \mathbf{c}_j^T = 0,$$
since then rows $k$ and $j$ are equal. The $j$th row of $B$ is $\mathbf{b}_j = \mathbf{a}_j + \beta\mathbf{a}_k$. Therefore, expanding $\det B$ along the $j$th row:
$$\det B = (\mathbf{a}_j + \beta\mathbf{a}_k) \cdot \mathbf{c}_j^T = \mathbf{a}_j \cdot \mathbf{c}_j^T + \beta(\mathbf{a}_k \cdot \mathbf{c}_j^T) = \det A.$$

Example 12.5. Suppose that $A$ is a $4 \times 4$ matrix and suppose that $\det A = 11$. If $B$ is obtained from $A$ by interchanging rows 2 and 4, what is $\det B$?

Solution. Interchanging (or swapping) rows changes the sign of the determinant. Therefore, $\det B = -11$.
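Theorems 12.1, 12.3, and 12.4 can be spot-checked numerically. The sketch below uses a recursive first-row expansion and a small matrix of my own choosing:

```python
def det(A):
    # first-row cofactor expansion (Definition 11.6)
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det([r[:k] + r[k+1:] for r in A[1:]])
               for k in range(len(A)))

A = [[1, 3, 0], [2, 1, -1], [0, 2, 5]]
d = det(A)

# Type 1: interchanging two rows flips the sign (Theorem 12.1)
assert det([A[1], A[0], A[2]]) == -d
# Type 2: scaling one row by beta scales the determinant by beta (Theorem 12.3)
beta = 4
assert det([A[0], [beta * x for x in A[1]], A[2]]) == beta * d
# Type 3: adding a multiple of one row to another leaves det unchanged (Theorem 12.4)
A3 = [A[0], [x + 3 * y for x, y in zip(A[1], A[0])], A[2]]
assert det(A3) == d
print(d)  # -23
```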
Example 12.6. Suppose that $A$ is a $4 \times 4$ matrix and suppose that $\det A = 11$. Let $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3, \mathbf{a}_4$ denote the rows of $A$. If $B$ is obtained from $A$ by replacing row $\mathbf{a}_3$ by $3\mathbf{a}_1 + \mathbf{a}_3$, what is $\det B$?

Solution. This is a Type 3 elementary row operation, which preserves the value of the determinant. Therefore, $\det B = 11$.

Example 12.7. Suppose that $A$ is a $4 \times 4$ matrix and suppose that $\det A = 11$. Let $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3, \mathbf{a}_4$ denote the rows of $A$. If $B$ is obtained from $A$ by replacing row $\mathbf{a}_3$ by $3\mathbf{a}_1 + 7\mathbf{a}_3$, what is $\det B$?

Solution. This is not quite a Type 3 elementary row operation, because $\mathbf{a}_3$ is multiplied by 7. The third row of $B$ is $\mathbf{b}_3 = 3\mathbf{a}_1 + 7\mathbf{a}_3$. Therefore, expanding $\det B$ along the third row (and using $\mathbf{a}_1 \cdot \mathbf{c}_3^T = 0$):
$$\det B = (3\mathbf{a}_1 + 7\mathbf{a}_3) \cdot \mathbf{c}_3^T = 3\mathbf{a}_1 \cdot \mathbf{c}_3^T + 7\mathbf{a}_3 \cdot \mathbf{c}_3^T = 7(\mathbf{a}_3 \cdot \mathbf{c}_3^T) = 7\det A = 77.$$

Example 12.8. Suppose that $A$ is a $4 \times 4$ matrix and suppose that $\det A = 11$. Let $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3, \mathbf{a}_4$ denote the rows of $A$. If $B$ is obtained from $A$ by replacing row $\mathbf{a}_3$ by $4\mathbf{a}_1 + 5\mathbf{a}_2$, what is $\det B$?

Solution. Again, this is not a Type 3 elementary row operation. The third row of $B$ is $\mathbf{b}_3 = 4\mathbf{a}_1 + 5\mathbf{a}_2$. Therefore, expanding $\det B$ along the third row:
$$\det B = (4\mathbf{a}_1 + 5\mathbf{a}_2) \cdot \mathbf{c}_3^T = 4\mathbf{a}_1 \cdot \mathbf{c}_3^T + 5\mathbf{a}_2 \cdot \mathbf{c}_3^T = 0 + 0 = 0.$$
# 12.2 Determinants and Invertibility of Matrices

The following theorem characterizes invertibility of matrices with the determinant.

Theorem 12.9: A square matrix $A$ is invertible if and only if $\det A \neq 0$.

Proof. Beginning with the matrix $A$, perform elementary row operations and generate a sequence of matrices $A_1, A_2, \ldots, A_p$ such that $A_p$ is in row echelon form and thus triangular:
$$A \sim A_1 \sim A_2 \sim \cdots \sim A_p.$$
Thus, matrix $A_i$ is obtained from $A_{i-1}$ by performing one of the elementary row operations. From Theorems 12.1, 12.3, and 12.4, if $\det A_{i-1} \neq 0$ then $\det A_i \neq 0$. In particular, $\det A = 0$ if and only if $\det A_p = 0$. Now, $A_p$ is triangular, and therefore its determinant is the product of its diagonal entries. If all the diagonal entries are non-zero, then $\det A_p \neq 0$ and hence $\det A \neq 0$. In this case, $A$ is invertible because there are $r = n$ leading entries in $A_p$. If a diagonal entry of $A_p$ is zero, then $\det A = \det A_p = 0$. In this case, $A$ is not invertible because there are $r < n$ leading entries in $A_p$.

The next theorem describes how the determinant behaves when the whole matrix is multiplied by a scalar.

Theorem 12.10: Let $A \in \mathbb{R}^{n \times n}$ and let $\beta$ be a scalar. Then $\det(\beta A) = \beta^n\det A$.

Proof. Consider the $2 \times 2$ case:
$$\det(\beta A) = \begin{vmatrix} \beta a_{11} & \beta a_{12} \\ \beta a_{21} & \beta a_{22} \end{vmatrix} = \beta a_{11} \cdot \beta a_{22} - \beta a_{12} \cdot \beta a_{21} = \beta^2(a_{11}a_{22} - a_{12}a_{21}) = \beta^2\det A.$$
Thus, the statement holds for $2 \times 2$ matrices. Consider a $3 \times 3$ matrix $A$. Then
$$\begin{aligned} \det(\beta A) &= \beta a_{11}|\beta A_{11}| - \beta a_{12}|\beta A_{12}| + \beta a_{13}|\beta A_{13}| \\ &= \beta a_{11}\beta^2|A_{11}| - \beta a_{12}\beta^2|A_{12}| + \beta a_{13}\beta^2|A_{13}| \\ &= \beta^3(a_{11}|A_{11}| - a_{12}|A_{12}| + a_{13}|A_{13}|) \\ &= \beta^3\det A. \end{aligned}$$
The general case can be treated using mathematical induction on $n$.

Example 12.11. Suppose that $A$ is a $4 \times 4$ matrix and suppose that $\det A = 11$. What is $\det(3A)$?

Solution. We have
$$\det(3A) = 3^4\det A = 81 \cdot 11 = 891.$$

The following theorem characterizes how the determinant behaves under matrix multiplication.

Theorem 12.12: Let $A$ and $B$ be $n \times n$ matrices. Then $\det(AB) = \det(A)\det(B)$.

Corollary 12.13: For any square matrix, $\det(A^k) = (\det A)^k$.

Corollary 12.14: If $A$ is invertible, then
$$\det(A^{-1}) = \frac{1}{\det A}.$$

Proof. From $AA^{-1} = I_n$ we have that $\det(AA^{-1}) = 1$. But also $\det(AA^{-1}) = \det(A)\det(A^{-1})$. Therefore $\det(A)\det(A^{-1}) = 1$, or equivalently,
$$\det A^{-1} = \frac{1}{\det A}.$$
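Theorem 12.12 and Corollary 12.14 are easy to confirm on $2 \times 2$ matrices already computed in these notes (Examples 10.7 and 11.2):

```python
def det2(M):
    # 2x2 determinant from Definition 11.1
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 3], [-1, -2]]   # det A = 1  (the matrix of Example 10.7)
B = [[3, -1], [8, 2]]    # det B = 14 (the matrix of Example 11.2(i))

# Theorem 12.12: the determinant is multiplicative
assert det2(mat_mul(A, B)) == det2(A) * det2(B) == 14

# Corollary 12.14: det(A^{-1}) * det(A) = 1; A^{-1} was computed in Example 10.7
A_inv = [[-2, -3], [1, 1]]
assert det2(A_inv) * det2(A) == 1
print(det2(mat_mul(A, B)))  # 14
```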
Example 12.15. Let $A$, $B$, $C$ be $n \times n$ matrices. Suppose that $\det A = 3$, $\det B = 0$, and $\det C = 7$.
(i) Is $AC$ invertible?  (ii) Is $AB$ invertible?  (iii) Is $ACB$ invertible?

Solution.
(i): We have $\det(AC) = \det A \det C = 3 \cdot 7 = 21$. Thus, $AC$ is invertible.
(ii): We have $\det(AB) = \det A \det B = 3 \cdot 0 = 0$. Thus, $AB$ is not invertible.
(iii): We have $\det(ACB) = \det A \det C \det B = 3 \cdot 7 \cdot 0 = 0$. Thus, $ACB$ is not invertible.

After this lecture you should know the following:
• how the determinant behaves under elementary row operations
• that $A$ is invertible if and only if $\det A \neq 0$
• that $\det(AB) = \det(A)\det(B)$

# Lecture 13 Applications of the Determinant

# 13.1 The Cofactor Method

Recall that for $A \in \mathbb{R}^{n \times n}$ we defined
$$\det A = a_{j1}C_{j1} + a_{j2}C_{j2} + \cdots + a_{jn}C_{jn},$$
where $C_{jk} = (-1)^{j+k}\det A_{jk}$ is called the $(j, k)$-cofactor of $A$ and $\mathbf{a}_j = [a_{j1} \; a_{j2} \; \cdots \; a_{jn}]$ is the $j$th row of $A$. If $\mathbf{c}_j = [C_{j1} \; C_{j2} \; \cdots \; C_{jn}]$, then
$$\det A = [a_{j1} \; a_{j2} \; \cdots \; a_{jn}]\begin{bmatrix} C_{j1} \\ C_{j2} \\ \vdots \\ C_{jn} \end{bmatrix} = \mathbf{a}_j \cdot \mathbf{c}_j^T.$$
Suppose that $B$ is the matrix obtained from $A$ by replacing row $\mathbf{a}_j$ with a distinct row $\mathbf{a}_k$. Since $B$ then has two equal rows, computing $\det B$ by expanding along its $j$th row $\mathbf{b}_j = \mathbf{a}_k$ gives
$$\det B = \mathbf{a}_k \cdot \mathbf{c}_j^T = 0.$$
The Cofactor Method is an alternative method to find the inverse of an invertible matrix. Recall that for any matrix $A \in \mathbb{R}^{n \times n}$, if we expand along the $j$th row, then $\det A = \mathbf{a}_j \cdot \mathbf{c}_j^T$. On the other hand, if $j \neq k$, then $\mathbf{a}_j \cdot \mathbf{c}_k^T = 0$. In summary,
$$\mathbf{a}_j \cdot \mathbf{c}_k^T = \begin{cases} \det A, & \text{if } j = k \\ 0, & \text{if } j \neq k. \end{cases}$$
Form the cofactor matrix
$$\operatorname{Cof}(A) = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix} = \begin{bmatrix} \mathbf{c}_1 \\ \mathbf{c}_2 \\ \vdots \\ \mathbf{c}_n \end{bmatrix}.$$
Then
$$A(\operatorname{Cof}(A))^T = \begin{bmatrix} \mathbf{a}_1 \\ \mathbf{a}_2 \\ \vdots \\ \mathbf{a}_n \end{bmatrix}[\mathbf{c}_1^T \; \mathbf{c}_2^T \; \cdots \; \mathbf{c}_n^T] = \begin{bmatrix} \mathbf{a}_1\mathbf{c}_1^T & \mathbf{a}_1\mathbf{c}_2^T & \cdots & \mathbf{a}_1\mathbf{c}_n^T \\ \mathbf{a}_2\mathbf{c}_1^T & \mathbf{a}_2\mathbf{c}_2^T & \cdots & \mathbf{a}_2\mathbf{c}_n^T \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{a}_n\mathbf{c}_1^T & \mathbf{a}_n\mathbf{c}_2^T & \cdots & \mathbf{a}_n\mathbf{c}_n^T \end{bmatrix} = \begin{bmatrix} \det A & 0 & \cdots & 0 \\ 0 & \det A & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \det A \end{bmatrix}$$
This can be written succinctly as
$$A(\operatorname{Cof}(A))^T = \det(A)I_n.$$
Now, if $\det A \neq 0$, then we can divide by $\det A$ to obtain
$$A\left(\frac{1}{\det A}\right)(\operatorname{Cof}(A))^T = I_n.$$
This leads to the following formula for the inverse:
$$A^{-1} = \frac{1}{\det A}(\operatorname{Cof}(A))^T$$
Although this is an explicit and elegant formula for $A^{-1}$, it is computationally intensive, even for $3 \times 3$ matrices. However, for the $2 \times 2$ case it provides a useful formula to compute the matrix inverse. Indeed, if $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, we have
$$\operatorname{Cof}(A) = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}$$
and therefore
$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
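The adjugate formula $A^{-1} = \frac{1}{\det A}(\operatorname{Cof}(A))^T$ can be implemented directly. The sketch below uses exact rational arithmetic via `fractions.Fraction` to avoid round-off; the helper names are my own, and the result is checked against Example 10.3:

```python
from fractions import Fraction

def det(A):
    # first-row cofactor expansion, as in Definition 11.6
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det([r[:k] + r[k+1:] for r in A[1:]])
               for k in range(len(A)))

def minor(A, i, j):
    # delete row i and column j (0-indexed)
    return [r[:j] + r[j+1:] for k, r in enumerate(A) if k != i]

def cofactor_inverse(A):
    """A^{-1} = (1/det A) * Cof(A)^T, the cofactor method."""
    n, d = len(A), det(A)
    if d == 0:
        raise ValueError("matrix is not invertible")
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
           for i in range(n)]
    # transpose the cofactor matrix and divide by the determinant
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

# the matrix of Example 10.3; its inverse should come out as C from that example
A = [[1, -3, 0], [-1, 2, -2], [-2, 6, 1]]
print(cofactor_inverse(A))  # [[-14, -3, -6], [-5, -1, -2], [2, 0, 1]] (as Fractions)
```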
When does an integer matrix have an integer inverse? We can answer this question using the Cofactor Method. Let us first be clear about what we mean by an integer matrix.

Definition 13.1: A matrix $A \in \mathbb{R}^{m \times n}$ is called an integer matrix if every entry of $A$ is an integer.

Suppose that $A \in \mathbb{R}^{n \times n}$ is an invertible integer matrix. Then $\det(A)$ is a non-zero integer and $(\operatorname{Cof}(A))^T$ is an integer matrix. If $A^{-1}$ is also an integer matrix, then $\det(A^{-1})$ is also an integer. Now $\det(A)\det(A^{-1}) = 1$, thus it must be the case that $\det(A) = \pm 1$. Suppose on the other hand that $\det(A) = \pm 1$. Then by the Cofactor Method,
$$A^{-1} = \frac{1}{\det(A)}(\operatorname{Cof}(A))^T = \pm(\operatorname{Cof}(A))^T,$$
and therefore $A^{-1}$ is also an integer matrix. We have proved the following.

Theorem 13.2: An invertible integer matrix $A \in \mathbb{R}^{n \times n}$ has an integer inverse $A^{-1}$ if and only if $\det A = \pm 1$.

We can use the previous theorem to generate integer matrices with an integer inverse as follows. Begin with an upper triangular matrix $M_0$ having integer entries and whose diagonal entries are either $1$ or $-1$. By construction, $\det(M_0) = \pm 1$. Perform any sequence of elementary row operations of Type 1 and Type 3. This generates a sequence of matrices $M_1, \ldots, M_p$ whose entries are integers. Moreover, $M_0 \sim M_1 \sim \cdots \sim M_p$, and since Type 1 operations only change the sign of the determinant while Type 3 operations preserve it,
$$\det(M_p) = \pm\det(M_0) = \pm 1.$$

# 13.2 Cramer's Rule

The Cofactor Method can be used to give an explicit formula for the solution of a linear system where the coefficient matrix is invertible. The formula is known as Cramer's Rule. To derive this formula, recall that if $A$ is invertible, then the solution to $A\mathbf{x} = \mathbf{b}$ is $\mathbf{x} = A^{-1}\mathbf{b}$. Using the Cofactor Method,
$$A^{-1} = \frac{1}{\det A}(\operatorname{Cof}(A))^T.$$