| Q | A | Result |
|---|---|---|
\( - x = 7 \).
|
Since \( - x \) is actually \( - 1 \cdot x \) and \( \left( {-1}\right) \left( {-1}\right) = 1 \), we can isolate \( x \) by multiplying both sides of the\n\nequation by -1 .\n\n\[ \n\left( {-1}\right) \left( {-x}\right) = - 1 \cdot 7 \n\]\n\n\[ \nx = - 7 \n\]\n\nCheck: When \( x = - 7 \) ,\n\n\[ \n- x = 7 \n\]\n\nbecomes\n\n\( - \left( {-7}\right) \overset{?}{=} 7 \)\n\n\( 7 = 7 \)\n\nThe solution to \( - x = 7 \) is \( x = - 7 \) .
|
No
|
\( {6x} - 4 = - {16} \)
|
-4 is associated with \( x \) by subtraction. Undo the association by adding 4 to both sides.\n\n\[{6x} - 4 + 4 = - {16} + 4\]\n\n\[{6x} = - {12}\]\n\n6 is associated with \( x \) by multiplication. Undo the association by dividing both sides by 6.\n\n\[\frac{6x}{6} = \frac{-{12}}{6}\]\n\n\[x = - 2\]
|
Yes
|
\( {5m} - 6 - {4m} = {4m} - 8 + {3m} \)
|
Begin by solving this equation by combining like terms. \( m - 6 = {7m} - 8 \) Choose a side on which to isolate \( m \). Since 7 is greater than 1, we'll isolate \( m \) on the right side.\n\nSubtract \( m \) from both sides.\n\n\[ \nm - 6 - m = {7m} - 8 - m \n\]\n\n\[ \n- 6 = {6m} - 8 \n\]\n\n8 is associated with \( m \) by subtraction. Undo the association by adding 8 to both sides.\n\n\[ \n- 6 + 8 = {6m} - 8 + 8 \n\]\n\n\( 2 = {6m} \)\n\n6 is associated with \( m \) by multiplication. Undo the association by dividing both sides by 6.\n\n\[ \n\frac{2}{6} = \frac{6m}{6}\;\text{Reduce.} \n\]\n\n\[ \n\frac{1}{3} = m \n\]\n\nNotice that if we had chosen to isolate \( m \) on the left side of the equation rather than the right side, we would have proceeded as follows:\n\n\[ \nm - 6 = {7m} - 8 \n\]\n\nSubtract \( {7m} \) from both sides.\n\n\[ \nm - 6 - {7m} = {7m} - 8 - {7m} \n\]\n\n\[ \n- {6m} - 6 = - 8 \n\]\n\nAdd 6 to both sides.\n\n\[ \n- {6m} - 6 + 6 = - 8 + 6 \n\]\n\n\[ \n- {6m} = - 2 \n\]\n\nDivide both sides by -6 .\n\n\[ \n\frac{-{6m}}{-6} = \frac{-2}{-6} \n\]\n\n\[ \nm = \frac{1}{3} \n\]\n\nThis is the same result as with the previous approach.
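The steps above can be verified with exact rational arithmetic; a quick sketch using Python's `fractions` module (not part of the original solution):

```python
from fractions import Fraction

# From combining like terms: m - 6 = 7m - 8, so 2 = 6m and m = 2/6.
m = Fraction(2, 6)          # 6m = 2  =>  m = 2/6
assert m == Fraction(1, 3)  # the fraction reduces to 1/3

# Check the solution in the original equation 5m - 6 - 4m = 4m - 8 + 3m.
lhs = 5 * m - 6 - 4 * m
rhs = 4 * m - 8 + 3 * m
assert lhs == rhs
print(m)  # 1/3
```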
|
Yes
|
\( \frac{8x}{7} = - 2 \)
|
7 is associated with \( x \) by division. Undo the association by multiplying both sides by 7 .\n\n\[ 7 \cdot \frac{8x}{7} = 7\left( { - 2}\right) \]\n\n\( {8x} = - {14} \)\n\n8 is associated with \( x \) by multiplication. Undo the association by dividing both sides by 8 .\n\n\[ \frac{8x}{8} = \frac{-{14}}{8}\;\text{Reduce.} \]\n\n\[ x = \frac{-7}{4} \]
|
Yes
|
\[ \underset{9}{\underbrace{\text{ Nine }}}\underset{ + }{\underbrace{\text{ more than }}}\underset{x}{\underbrace{\text{ some number }}} \]
|
Translation: \( 9 + x \) .
|
Yes
|
Example 11.36\n\n\[ \underset{18}{\underbrace{\text{ Eighteen }}}\underset{ - }{\underbrace{\text{ minus }}}\underset{x}{\underbrace{\text{ a number }}} \]
|
Translation: \( {18} - x \) .
|
Yes
|
\[ \underset{y}{\underbrace{\text{ A quantity }}}\underset{ - }{\underbrace{\text{ less }}}\underset{5}{\underbrace{\text{ five. }}} \]
|
Translation: \( y - 5 \) .
|
Yes
|
\( \underset{4}{\underbrace{\text{ Four }}}\underset{ \cdot }{\underbrace{\text{ times }}}\underset{x}{\underbrace{\text{ a number }}}\underset{ = }{\underbrace{\text{ is }}}\underset{16}{\underbrace{\text{ sixteen. }}} \)
|
Translation: \( {4x} = {16} \)
|
Yes
|
\[ \underset{\frac{1}{5}}{\underbrace{\text{ One fifth }}}\underset{ \cdot }{\underbrace{\text{ of }}}\underset{n}{\underbrace{\text{ a number }}}\underset{ = }{\underbrace{\text{ is }}}\underset{30}{\underbrace{\text{ thirty. }}} \]
|
Translation: \( \frac{1}{5}n = {30} \), or \( \frac{n}{5} = {30} \)
|
Yes
|
\[ \underset{5}{\underbrace{\text{ Five }}}\underset{ \cdot }{\underbrace{\text{ times }}}\underset{x}{\underbrace{\text{ a number }}}\underset{ = }{\underbrace{\text{ is }}}\underset{2}{\underbrace{\text{ two }}}\underset{ + }{\underbrace{\text{ more than }}}\underset{2 \cdot }{\underbrace{\text{ twice }}}\underset{x}{\underbrace{\text{ the number }}} \]
|
Translation: \( {5x} = 2 + {2x} \)
|
Yes
|
Sometimes the structure of the sentence indicates the use of grouping symbols. We'll be alert for commas, which set off terms.
|
Translation: \( \frac{x}{4} - 6 = {12} \)
|
No
|
What number decreased by six is five?
|
Step 1: Let \( n \) represent the unknown number.\n\nStep 2: Translate the words to mathematical symbols and construct an equation. Read phrases.\n\nWhat number: \( n \)\n\ndecreased by: \( - \)\n\nsix: \( 6 \)\n\nis: \( = \)\n\nfive: \( 5 \)\n\nThe equation is \( n - 6 = 5 \).\n\nStep 3: Solve this equation.\n\n\( n - 6 = 5 \) Add 6 to both sides.\n\n\[ n - 6 + 6 = 5 + 6 \]\n\n\[ n = {11} \]\n\nStep 4: Check the result.\n\nWhen 11 is decreased by 6, the result is \( {11} - 6 \), which is equal to 5. The solution checks.\n\nStep 5: The number is 11.
|
Yes
|
The sum of three consecutive odd integers is equal to one less than twice the first odd integer. Find the three integers.
|
Step 1: Let \( n = \) the first odd integer. Then\n\n\( n + 2 = \) the second odd integer, and\n\n\( n + 4 = \) the third odd integer.\n\nStep 2: Translate the words to mathematical symbols and construct an equation. Read phrases.\n\nThe sum of: add some numbers\n\nthree consecutive odd integers: \( n, n + 2, n + 4 \)\n\nis equal to: \( = \)\n\none less than: subtract 1 from\n\ntwice the first odd integer: \( {2n} \)\n\n\[ n + \left( {n + 2}\right) + \left( {n + 4}\right) = {2n} - 1 \]\n\nStep 3: Solve this equation.\n\n\[ n + n + 2 + n + 4 = {2n} - 1 \]\n\n\[ {3n} + 6 = {2n} - 1 \]\n\nSubtract \( {2n} \) from both sides.\n\n\[ {3n} + 6 - {2n} = {2n} - 1 - {2n} \]\n\n\[ n + 6 = - 1 \]\n\nSubtract 6 from both sides.\n\n\[ n + 6 - 6 = - 1 - 6 \]\n\n\[ n = - 7 \]\n\nThe first integer is -7 .\n\n\[ n + 2 = - 7 + 2 = - 5 \]\n\nThe second integer is -5 .\n\n\[ n + 4 = - 7 + 4 = - 3 \]\n\nThe third integer is -3 .\n\nStep 4: Check this result.\n\nThe sum of the three integers is \( - 7 + \left( {-5}\right) + \left( {-3}\right) = - {12} + \left( {-3}\right) = - {15} \).\n\nOne less than twice the first integer is \( 2\left( {-7}\right) - 1 = - {14} - 1 = - {15} \) . Since these two results are equal, the solution checks.\n\nStep 5: The three odd integers are -7, -5, and -3.
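A brute-force search over a range of integers confirms the worked answer (a quick sketch, searching -100 to 100):

```python
# Find every odd n with n + (n+2) + (n+4) equal to one less than 2n.
solutions = [
    n for n in range(-100, 100)
    if n % 2 != 0 and n + (n + 2) + (n + 4) == 2 * n - 1
]
assert solutions == [-7]  # only the first integer -7 works
print(solutions)  # [-7]
```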
|
Yes
|
The perimeter (length around) of a rectangle is 20 meters. If the length is 4 meters longer than the width, find the length and width of the rectangle.
|
Step 1: Let \( x = \) the width of the rectangle. Then, \( x + 4 = \) the length of the rectangle. Step 2: The length around the rectangle is \( x + x + 4 + x + x + 4 = 20 \). Step 3: \( 4x + 8 = 20 \) Subtract 8 from both sides. \( 4x = 12 \) Divide both sides by 4. \( x = 3 \) Then, \( x + 4 = 3 + 4 = 7 \). Step 4: Check: the perimeter is \( 3 + 7 + 3 + 7 = 20 \) meters, and the length, 7 meters, is 4 meters longer than the width. The solution checks. Step 5: The length of the rectangle is 7 meters. The width of the rectangle is 3 meters.
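The arithmetic in Steps 3 and 4 can be checked in a few lines of Python (a quick sketch, not part of the original solution):

```python
# Solve 4x + 8 = 20 for the width, then recover the length.
x = (20 - 8) / 4   # subtract 8 from both sides, then divide by 4
length = x + 4
assert 2 * (x + length) == 20  # the perimeter comes out to 20 meters
print(x, length)  # 3.0 7.0
```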
|
Yes
|
If we had only one processor working on this task, it would be easy to determine the finishing time; just add up the individual times. We assume one person can't work on two tasks at the same time, and we ignore things like drying times during which someone could work on another task. With one processor, a possible schedule would look like this, with a finishing time of 10 days.
|
[Schedule diagram: over times 0 through 10, processor \( {\mathrm{P}}_{1} \) works the tasks \( {\mathrm{T}}_{1},{\mathrm{T}}_{3},{\mathrm{T}}_{5},{\mathrm{T}}_{2},{\mathrm{T}}_{6},{\mathrm{T}}_{7},{\mathrm{T}}_{8},{\mathrm{T}}_{9} \) one after another, finishing at time 10.]
|
Yes
|
Suppose that we have collected weights from 100 male subjects as part of a nutrition study. For our weight data, we have values ranging from a low of 121 pounds to a high of 263 pounds, giving a total span of \( {263} - {121} = {142} \). We could create 7 intervals with a width of around 20, 14 intervals with a width of around 10, or somewhere in between. Often we have to experiment with a few possibilities to find something that represents the data well. Let us try using an interval width of 15. We could start at 121, or at 120 since it is a nice round number.
|
<table><thead><tr><th>Interval</th><th>Frequency</th></tr></thead><tr><td>120 - 134</td><td>4</td></tr><tr><td>135 - 149</td><td>14</td></tr><tr><td>150 - 164</td><td>16</td></tr><tr><td>165 - 179</td><td>28</td></tr><tr><td>180 - 194</td><td>12</td></tr><tr><td>195 - 209</td><td>8</td></tr><tr><td>210 - 224</td><td>7</td></tr><tr><td>225 - 239</td><td>6</td></tr><tr><td>240 - 254</td><td>2</td></tr><tr><td>255 - 269</td><td>3</td></tr></table>
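A frequency table like this can be built programmatically; a minimal sketch in plain Python (the `weights` list here is placeholder data, not the study's actual 100 values):

```python
# Count how many values fall in each interval of the given width,
# starting the first interval at `start`.
def frequency_table(data, start, width):
    counts = {}
    for x in data:
        k = (x - start) // width              # which bin x falls in
        lo = start + k * width                # lower edge of that bin
        counts[(lo, lo + width - 1)] = counts.get((lo, lo + width - 1), 0) + 1
    return counts

weights = [121, 134, 150, 178, 263]  # hypothetical sample data
table = frequency_table(weights, start=120, width=15)
print(table)
```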
|
Yes
|
The number of touchdown (TD) passes thrown by each of the 31 teams in the National Football League in the 2000 season is shown below.\n\n37 33 33 32 29 28 28 23 22 22 22 21 21 21 20\n\n20 19 19 18 18 18 18 16 15 14 14 14 12 12 9 6
|
Adding these values, we get 634 total TDs. Dividing by 31, the number of data values, we get \( {634}/{31} = {20.4516} \). It would be appropriate to round this to 20.5.\n\nIt would be most correct for us to report that the mean number of TD passes in the 2000 season was 20.5 passes per team.
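The computation can be reproduced directly from the 31 data values:

```python
# The TD passes thrown by each of the 31 teams in the 2000 season.
tds = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20,
       20, 19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]
total = sum(tds)
mean = total / len(tds)
print(total, round(mean, 1))  # 634 20.5
```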
|
No
|
Find the median of these quiz scores: 5 10 8 6 4 8 2 5 7 7
|
We start by listing the data in order: 2 4 5 5 6 7 7 8 8 10\n\nSince there are 10 data values, an even number, there is no one middle number. So we find the mean of the two middle numbers, 6 and 7, and get \( \left( {6 + 7}\right) /2 = {6.5} \).\n\nThe median quiz score was 6.5.
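The same procedure in code; a small sketch of the sort-then-average rule:

```python
# Sort, then take the middle value, or the mean of the two middle
# values when the count is even.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

scores = [5, 10, 8, 6, 4, 8, 2, 5, 7, 7]
print(median(scores))  # 6.5
```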
|
No
|
The box plot below is based on the household income data with five-number summary \( {15},{27.5},{35},{40},{50} \).
|

|
No
|
Use a Venn diagram to illustrate \( {\left( H \cap F\right) }^{c} \cap W \)
|
We'll start by identifying everything in the set \( H \cap F \).\n\n[Venn diagram with the region \( H \cap F \) shaded.]\n\nNow, \( {\left( H \cap F\right) }^{c} \cap W \) will contain everything not in the set identified above that is also in set \( W \).\n\n[Venn diagram with the region \( {\left( H \cap F\right) }^{c} \cap W \) shaded.]
|
Yes
|
Multiply: \( \left( {2 + {5i}}\right) \left( {4 + i}\right) \) .
|
\n\n\( \left( {2 + {5i}}\right) \left( {4 + i}\right) \) Expand\n\n\( = 8 + {20i} + {2i} + 5{i}^{2} \) Since \( i = \sqrt{-1},{i}^{2} = - 1 \)\n\n\( = 8 + {20i} + {2i} + 5\left( {-1}\right) \) Simplify\n\n\( = 3 + {22i} \)
|
Yes
|
Visualize the product \( i\left( {1 + {2i}}\right) \)
|
Multiplying, we'd get\n\n\( i \cdot 1 + i \cdot {2i} \)\n\n\[ \n= i + 2{i}^{2} \n\]\n\n\[ \n= i + 2\left( {-1}\right) \n\]\n\n\[ \n= - 2 + i \n\]\n\nIn this case, the distance from the origin has not changed, but the point has been rotated about the origin, \( {90}^{ \circ } \) counter-clockwise.
|
Yes
|
If I go to the mall, then I'll buy new jeans. If I buy new jeans, I'll buy a shirt to go with it. Conclusion: If I go to the mall, I'll buy a shirt.
|
Let \( m = \) I go to the mall, \( j = \) I buy jeans, and \( s = \) I buy a shirt. The premises and conclusion can be stated as: Premise: \( \;m \rightarrow j \) Premise: \( \;j \rightarrow s \) Conclusion: \( \;m \rightarrow s \) We can construct a truth table for \( \left\lbrack {\left( {m \rightarrow j}\right) \land \left( {j \rightarrow s}\right) }\right\rbrack \rightarrow \left( {m \rightarrow s}\right) \). Try to recreate each step and see how the truth table was constructed. <table><thead><tr><th>\( m \)</th><th>\( j \)</th><th>\( s \)</th><th>\( m \rightarrow j \)</th><th>\( j \rightarrow s \)</th><th>\( \left( {m \rightarrow j}\right) \land \left( {j \rightarrow s}\right) \)</th><th>\( m \rightarrow s \)</th><th>\( \left\lbrack {\left( {m \rightarrow j}\right) \land \left( {j \rightarrow s}\right) }\right\rbrack \rightarrow \left( {m \rightarrow s}\right) \)</th></tr></thead><tr><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td></tr><tr><td>T</td><td>T</td><td>F</td><td>T</td><td>F</td><td>F</td><td>F</td><td>T</td></tr><tr><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td><td>T</td><td>T</td></tr><tr><td>T</td><td>F</td><td>F</td><td>F</td><td>T</td><td>F</td><td>F</td><td>T</td></tr><tr><td>F</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td></tr><tr><td>F</td><td>T</td><td>F</td><td>T</td><td>F</td><td>F</td><td>T</td><td>T</td></tr><tr><td>F</td><td>F</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td></tr><tr><td>F</td><td>F</td><td>F</td><td>T</td><td>T</td><td>T</td><td>T</td><td>T</td></tr></table> From the final column of the truth table, we can see this is a valid argument.
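The truth-table check can be mechanized: an argument form is valid exactly when the conditional [premises → conclusion] is true in every row. A small sketch:

```python
from itertools import product

# p -> q is false only when p is true and q is false.
def implies(p, q):
    return (not p) or q

# Check [(m -> j) and (j -> s)] -> (m -> s) over all 8 truth assignments.
valid = all(
    implies(implies(m, j) and implies(j, s), implies(m, s))
    for m, j, s in product([True, False], repeat=3)
)
print(valid)  # True
```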
|
Yes
|
If I drop my phone into the swimming pool, my phone will be ruined.
|
If we let \( d = \mathrm{I} \) drop the phone in the pool and \( r = \) the phone is ruined, then we can represent the argument this way: Premise \( \;d \rightarrow r \) Premise \( \; \sim r \) Conclusion: \( \sim d \) The form of this argument matches what we need to invoke the law of contraposition, so it is a valid argument.
|
Yes
|
Example 39\n\nPremise: I can either drive or take the train.\n\nPremise: I refuse to drive.\n\nConclusion: I will take the train.\n\nIf we let \( d = \) I drive and \( t = \) I take the train, then the symbolic representation of the argument is:\n\nPremise \( \;d \vee t \)\n\nPremise \( \; \sim d \)\n\nConclusion: \( t \)
|
This argument is valid because it has the form of a disjunctive syllogism. I have two choices, and one of them is not going to happen, so the other one must happen.
|
Yes
|
If I don't buy a boat, I must not have worked hard.
|
If we let \( h = \) working hard, \( r = \) getting a raise, and \( b = \) buying a boat, then we can represent our argument symbolically: Premise \( \;h \rightarrow r \) Premise \( \;r \rightarrow b \) Conclusion: \( \; \sim b \rightarrow \sim h \) Using the transitive property with the two premises, we can conclude that \( h \rightarrow b \); if I work hard, then I will buy a boat. When we learned about the contrapositive, we saw that the conditional statement \( h \rightarrow b \) is equivalent to \( \sim b \rightarrow \sim h \). Therefore, the conclusion does indeed follow logically from the premises.
|
Yes
|
Proposition 1.3.10 0 and 1 are unique. Also \( - x \) is unique and \( {x}^{-1} \) is unique. Furthermore, \( {0x} = {x0} = 0 \) and \( - x = \left( {-1}\right) x \) .
|
Proof: Suppose \( {0}^{\prime } \) is another additive identity. Then\n\n\[ \n{0}^{\prime } = {0}^{\prime } + 0 = 0.\n\]\n\nThus 0 is unique. Say \( {1}^{\prime } \) is another multiplicative identity. Then\n\n\[ \n1 = {1}^{\prime }1 = {1}^{\prime }.\n\]\n\nNow suppose \( y \) acts like the additive inverse of \( x \) . Then\n\n\[ \n- x = \left( {-x}\right) + 0 = \left( {-x}\right) + \left( {x + y}\right) = \left( {-x + x}\right) + y = y.\n\]\n\nThe uniqueness of \( {x}^{-1} \) follows from the same argument with multiplication in place of addition. Next,\n\n\[ \n{0x} = \left( {0 + 0}\right) x = {0x} + {0x}\n\]\n\nand so\n\n\[ \n0 = - \left( {0x}\right) + {0x} = - \left( {0x}\right) + \left( {{0x} + {0x}}\right) = \left( {-\left( {0x}\right) + {0x}}\right) + {0x} = {0x}.\n\]\n\nFinally,\n\n\[ \nx + \left( {-1}\right) x = \left( {1 + \left( {-1}\right) }\right) x = {0x} = 0\n\]\n\nand so by uniqueness of the additive inverse, \( \left( {-1}\right) x = - x \) . \( \blacksquare \)
|
Yes
|
1. If \( x < y \) and \( y < z \), then \( x < z \) .
|
First consider the transitive law. Suppose that \( x < y \) and \( y < z \). Then from the axioms, \( x + y < y + z \) and so, adding \( - y \) to both sides, it follows that\n\n\[ x < z \]
|
No
|
Theorem 1.5.4 Let \( r > 0 \) be given. Then if \( n \) is a positive integer,\n\n\[{\left\lbrack r\left( \cos t + i\sin t\right) \right\rbrack }^{n} = {r}^{n}\left( {\cos {nt} + i\sin {nt}}\right) .\]
|
Proof: It is clear the formula holds if \( n = 1 \) . Suppose it is true for \( n \).\n\n\[{\left\lbrack r\left( \cos t + i\sin t\right) \right\rbrack }^{n + 1} = {\left\lbrack r\left( \cos t + i\sin t\right) \right\rbrack }^{n}\left\lbrack {r\left( {\cos t + i\sin t}\right) }\right\rbrack\]\n\nwhich by induction equals\n\n\[= {r}^{n + 1}\left( {\cos {nt} + i\sin {nt}}\right) \left( {\cos t + i\sin t}\right)\]\n\n\[= {r}^{n + 1}\left( {\left( {\cos {nt}\cos t - \sin {nt}\sin t}\right) + i\left( {\sin {nt}\cos t + \cos {nt}\sin t}\right) }\right)\]\n\n\[= {r}^{n + 1}\left( {\cos \left( {n + 1}\right) t + i\sin \left( {n + 1}\right) t}\right)\]\n\nby the formulas for the cosine and sine of the sum of two angles.
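A quick numerical spot-check of De Moivre's formula (the values of \( r \), \( t \), and \( n \) are chosen arbitrarily):

```python
import math

# Compare [r(cos t + i sin t)]^n against r^n (cos nt + i sin nt).
r, t, n = 2.0, 0.7, 5
z = r * (math.cos(t) + 1j * math.sin(t))
lhs = z ** n
rhs = r**n * (math.cos(n * t) + 1j * math.sin(n * t))
assert abs(lhs - rhs) < 1e-9  # equal up to floating-point error
print(lhs)
```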
|
Yes
|
Corollary 1.5.5 Let \( z \) be a nonzero complex number. Then there are always exactly \( k \) \( {k}^{th} \) roots of \( z \) in \( \mathbb{C} \).
|
Proof: Let \( z = x + {iy} \) and let \( z = \left| z\right| \left( {\cos t + i\sin t}\right) \) be the polar form of the complex number. By De Moivre's theorem, a complex number,\n\n\[ r\left( {\cos \alpha + i\sin \alpha }\right) \]\n\nis a \( {k}^{\text{th }} \) root of \( z \) if and only if\n\n\[ {r}^{k}\left( {\cos {k\alpha } + i\sin {k\alpha }}\right) = \left| z\right| \left( {\cos t + i\sin t}\right) . \]\n\nThis requires \( {r}^{k} = \left| z\right| \) and so \( r = {\left| z\right| }^{1/k} \) and also both \( \cos \left( {k\alpha }\right) = \cos t \) and \( \sin \left( {k\alpha }\right) = \sin t \) .\n\nThis can only happen if\n\n\[ {k\alpha } = t + {2l\pi } \]\n\nfor \( l \) an integer. Thus\n\n\[ \alpha = \frac{t + {2l\pi }}{k}, l \in \mathbb{Z} \]\n\nand so the \( {k}^{th} \) roots of \( z \) are of the form\n\n\[ {\left| z\right| }^{1/k}\left( {\cos \left( \frac{t + {2l\pi }}{k}\right) + i\sin \left( \frac{t + {2l\pi }}{k}\right) }\right), l \in \mathbb{Z}. \]\n\nSince the cosine and sine are periodic of period \( {2\pi } \), there are exactly \( k \) distinct numbers which result from this formula.
|
Yes
|
Example 1.5.6 Find the three cube roots of \( i \) .
|
First note that \( i = 1\left( {\cos \left( \frac{\pi }{2}\right) + i\sin \left( \frac{\pi }{2}\right) }\right) \). Using the formula in the proof of the above corollary, the cube roots of \( i \) are\n\n\[ 1\left( {\cos \left( \frac{\left( {\pi /2}\right) + {2l\pi }}{3}\right) + i\sin \left( \frac{\left( {\pi /2}\right) + {2l\pi }}{3}\right) }\right) \]\n\nwhere \( l = 0,1,2 \). Therefore, the roots are\n\n\[ \cos \left( \frac{\pi }{6}\right) + i\sin \left( \frac{\pi }{6}\right) ,\cos \left( {\frac{5}{6}\pi }\right) + i\sin \left( {\frac{5}{6}\pi }\right) ,\]\n\nand\n\n\[ \cos \left( {\frac{3}{2}\pi }\right) + i\sin \left( {\frac{3}{2}\pi }\right) \]\n\nThus the cube roots of \( i \) are \( \frac{\sqrt{3}}{2} + i\left( \frac{1}{2}\right) ,\frac{-\sqrt{3}}{2} + i\left( \frac{1}{2}\right) \), and \( - i \) .
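The three roots can be confirmed numerically with Python's `cmath` module, using the formula from the corollary:

```python
import cmath

# Roots: cos((pi/2 + 2*l*pi)/3) + i sin((pi/2 + 2*l*pi)/3), l = 0, 1, 2,
# written compactly as exp(i*theta).
roots = [cmath.exp(1j * (cmath.pi / 2 + 2 * l * cmath.pi) / 3) for l in range(3)]
for w in roots:
    assert abs(w**3 - 1j) < 1e-9  # each really is a cube root of i
print(roots)
```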
|
Yes
|
Example 1.5.7 Factor the polynomial \( {x}^{3} - {27} \) .
|
First find the cube roots of 27. By the above procedure using De Moivre's theorem, these cube roots are \( 3,3\left( {\frac{-1}{2} + i\frac{\sqrt{3}}{2}}\right) \), and \( 3\left( {\frac{-1}{2} - i\frac{\sqrt{3}}{2}}\right) \). Therefore, \( {x}^{3} - {27} = \)\n\n\[ \n\left( {x - 3}\right) \left( {x - 3\left( {\frac{-1}{2} + i\frac{\sqrt{3}}{2}}\right) }\right) \left( {x - 3\left( {\frac{-1}{2} - i\frac{\sqrt{3}}{2}}\right) }\right) .\n\] \n\nNote also \( \left( {x - 3\left( {\frac{-1}{2} + i\frac{\sqrt{3}}{2}}\right) }\right) \left( {x - 3\left( {\frac{-1}{2} - i\frac{\sqrt{3}}{2}}\right) }\right) = {x}^{2} + {3x} + 9 \) and so \n\n\[ \n{x}^{3} - {27} = \left( {x - 3}\right) \left( {{x}^{2} + {3x} + 9}\right) \n\] \n\nwhere the quadratic polynomial \( {x}^{2} + {3x} + 9 \) cannot be factored without using complex numbers.
|
Yes
|
Proposition 1.7.3 Let \( S \) be a nonempty set and suppose \( \sup \left( S\right) \) exists. Then for every \( \delta > 0 \)\n\n\[ S \cap (\sup \left( S\right) - \delta ,\sup \left( S\right) \rbrack \neq \varnothing . \]\n\nIf \( \inf \left( S\right) \) exists, then for every \( \delta > 0 \) ,\n\n\[ S \cap \lbrack \inf \left( S\right) ,\inf \left( S\right) + \delta ) \neq \varnothing . \]
|
Proof: Consider the first claim. If the indicated set equals \( \varnothing \), then \( \sup \left( S\right) - \delta \) is an upper bound for \( S \) which is smaller than \( \sup \left( S\right) \), contrary to the definition of \( \sup \left( S\right) \) as the least upper bound. In the second claim, if the indicated set equals \( \varnothing \), then \( \inf \left( S\right) + \delta \) would be a lower bound which is larger than \( \inf \left( S\right) \), contrary to the definition of \( \inf \left( S\right) \) . \( \blacksquare \)
|
Yes
|
Theorem 1.8.3 (Mathematical induction) A set \( S \subseteq \mathbb{Z} \), having the property that \( a \in S \) and \( n + 1 \in S \) whenever \( n \in S \) contains all integers \( x \in \mathbb{Z} \) such that \( x \geq a \) .
|
Proof: Let \( T \equiv \left( {\lbrack a,\infty }\right) \cap \mathbb{Z}) \smallsetminus S \) . Thus \( T \) consists of all integers larger than or equal to \( a \) which are not in \( S \) . The theorem will be proved if \( T = \varnothing \) . If \( T \neq \varnothing \) then by the well ordering principle, there would have to exist a smallest element of \( T \), denoted as \( b \) . It must be the case that \( b > a \) since by definition, \( a \notin T \) . Then the integer, \( b - 1 \geq a \) and \( b - 1 \notin S \) because if \( b - 1 \in S \), then \( b - 1 + 1 = b \in S \) by the assumed property of \( S \) . Therefore, \( b - 1 \in \left( {\lbrack a,\infty }\right) \cap \mathbb{Z}) \smallsetminus S = T \) which contradicts the choice of \( b \) as the smallest element of \( T \) . ( \( b - 1 \) is smaller.) Since a contradiction is obtained by assuming \( T \neq \varnothing \), it must be the case that \( T = \varnothing \) and this says that everything in \( \lbrack a,\infty ) \cap \mathbb{Z} \) is also in \( S \) .
|
Yes
|
Show that for all \( n \in \mathbb{N},\frac{1}{2} \cdot \frac{3}{4}\cdots \frac{{2n} - 1}{2n} < \frac{1}{\sqrt{{2n} + 1}} \) .
|
If \( n = 1 \) this reduces to the statement that \( \frac{1}{2} < \frac{1}{\sqrt{3}} \) which is obviously true. Suppose then that the inequality holds for \( n \) . Then\n\n\[ \frac{1}{2} \cdot \frac{3}{4}\cdots \frac{{2n} - 1}{2n} \cdot \frac{{2n} + 1}{{2n} + 2} < \frac{1}{\sqrt{{2n} + 1}}\frac{{2n} + 1}{{2n} + 2} \]\n\n\[ = \frac{\sqrt{{2n} + 1}}{{2n} + 2}\text{.} \]\n\nThe theorem will be proved if this last expression is less than \( \frac{1}{\sqrt{{2n} + 3}} \) . This happens if and only if\n\n\[ {\left( \frac{1}{\sqrt{{2n} + 3}}\right) }^{2} = \frac{1}{{2n} + 3} > \frac{{2n} + 1}{{\left( 2n + 2\right) }^{2}} \]\n\nwhich occurs if and only if \( {\left( 2n + 2\right) }^{2} > \left( {{2n} + 3}\right) \left( {{2n} + 1}\right) \) and this is clearly true which may be seen from expanding both sides. This proves the inequality.
|
Yes
|
Proposition 1.8.6 \( \mathbb{R} \) has the Archimedean property.
|
Proof: Suppose it is not true. Then there exists \( x \in \mathbb{R} \) and \( a > 0 \) such that \( {na} \leq x \) for all \( n \in \mathbb{N} \) . Let \( S = \{ {na} : n \in \mathbb{N}\} \) . By assumption, this is bounded above by \( x \) . By completeness, it has a least upper bound \( y \) . By Proposition 1.7.3 there exists \( n \in \mathbb{N} \) such that\n\n\[ y - a < {na} \leq y. \]\n\nThen \( y = y - a + a < {na} + a = \left( {n + 1}\right) a \leq y \), a contradiction.
|
Yes
|
Theorem 1.8.7 Suppose \( x < y \) and \( y - x > 1 \) . Then there exists an integer \( l \in \mathbb{Z} \), such that \( x < l < y \) . If \( x \) is an integer, there is no integer \( y \) satisfying \( x < y < x + 1 \) .
|
Proof: First we show 1 is the smallest positive integer. By the well ordering principle there is a smallest positive integer \( p \) . If \( p < 1 \) then \( {p}^{2} < p \), and \( {p}^{2} \) is a positive integer, contradicting the choice of \( p \) . Therefore, 1 is the smallest positive integer. This shows there is no integer \( y \) satisfying \( x < y < x + 1 \) since otherwise, you could subtract \( x \) and conclude \( 0 < y - x < 1 \) for the integer \( y - x \) .\n\nNow suppose \( y - x > 1 \) and let\n\n\[ S \equiv \{ w \in \mathbb{N} : w \geq y\} .\n\]\n\nThe set \( S \) is nonempty by the Archimedean property. Let \( k \) be the smallest element of \( S \) . Therefore, \( k - 1 < y \) . Either \( k - 1 \leq x \) or \( k - 1 > x \) . If \( k - 1 \leq x \), then\n\n\[ y - x \leq y - \left( {k - 1}\right) = \overset{ \leq 0}{\overbrace{y - k}} + 1 \leq 1\n\]\n\ncontrary to the assumption that \( y - x > 1 \) . Therefore, \( x < k - 1 < y \) . Let \( l = k - 1 \) . \( \blacksquare \)
|
No
|
Theorem 1.8.8 If \( x < y \) then there exists a rational number \( r \) such that \( x < r < y \) .
|
Proof: Let \( n \in \mathbb{N} \) be large enough that\n\n\[ n\left( {y - x}\right) > 1 \]\n\nThus \( \left( {y - x}\right) \) added to itself \( n \) times is larger than 1 . Therefore,\n\n\[ n\left( {y - x}\right) = {ny} + n\left( {-x}\right) = {ny} - {nx} > 1. \]\n\nIt follows from Theorem 1.8.7 there exists \( m \in \mathbb{Z} \) such that\n\n\[ {nx} < m < {ny} \]\n\nand so take \( r = m/n \) . \( \blacksquare \)
|
Yes
|
Theorem 1.8.10 Suppose \( 0 < a \) and let \( b \geq 0 \) . Then there exists a unique integer \( p \) and real number \( r \) such that \( 0 \leq r < a \) and \( b = {pa} + r \) .
|
Proof: Let \( S \equiv \{ n \in \mathbb{N} : {an} > b\} \) . By the Archimedean property this set is nonempty. Let \( p + 1 \) be the smallest element of \( S \) . Then \( {pa} \leq b \) because \( p + 1 \) is the smallest in \( S \) . Therefore,\n\n\[ r \equiv b - {pa} \geq 0. \]\n\nIf \( r \geq a \) then \( b - {pa} \geq a \) and so \( b \geq \left( {p + 1}\right) a \) contradicting \( p + 1 \in S \) . Therefore, \( r < a \) as desired.\n\nTo verify uniqueness of \( p \) and \( r \), suppose \( {p}_{i} \) and \( {r}_{i}, i = 1,2 \), both work and \( {r}_{2} > {r}_{1} \) . Then a little algebra shows\n\n\[ {p}_{1} - {p}_{2} = \frac{{r}_{2} - {r}_{1}}{a} \in \left( {0,1}\right) . \]\n\nThus \( {p}_{1} - {p}_{2} \) is an integer between 0 and 1, contradicting Theorem 1.8.7. The case that \( {r}_{1} > {r}_{2} \) cannot occur either by similar reasoning. Thus \( {r}_{1} = {r}_{2} \) and it follows that \( {p}_{1} = {p}_{2} \) .
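For positive integers, Python's built-in `divmod` computes exactly the \( p \) and \( r \) of this theorem; a small illustration with arbitrary sample values:

```python
# b = p*a + r with 0 <= r < a, as in Theorem 1.8.10.
a, b = 7, 23          # sample values, not from the text
p, r = divmod(b, a)
assert b == p * a + r and 0 <= r < a
print(p, r)  # 3 2
```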
|
Yes
|
Theorem 1.9.3 Let \( m, n \) be two positive integers and define\n\n\[ S \equiv \{ {xm} + {yn} \in \mathbb{N} : x, y \in \mathbb{Z}\} . \]\n\nThen the smallest number in \( S \) is the greatest common divisor, denoted by \( \left( {m, n}\right) \) .
|
Proof: First note that both \( m \) and \( n \) are in \( S \) so it is a nonempty set of positive integers. By well ordering, there is a smallest element of \( S \), called \( p = {x}_{0}m + {y}_{0}n \) . Either \( p \) divides \( m \) or it does not. If \( p \) does not divide \( m \), then by Theorem 1.8.10,\n\n\[ m = {pq} + r \]\n\nwhere \( 0 < r < p \) . Thus \( m = \left( {{x}_{0}m + {y}_{0}n}\right) q + r \) and so, solving for \( r \) ,\n\n\[ r = m\left( {1 - {x}_{0}q}\right) + \left( {-{y}_{0}q}\right) n \in S. \]\n\nHowever, this is a contradiction because \( p \) was the smallest element of \( S \) . Thus \( p \mid m \) . Similarly \( p \mid n \) .\n\nNow suppose \( q \) divides both \( m \) and \( n \) . Then \( m = {qx} \) and \( n = {qy} \) for integers, \( x \) and \( y \) . Therefore,\n\n\[ p = m{x}_{0} + n{y}_{0} = {x}_{0}{qx} + {y}_{0}{qy} = q\left( {{x}_{0}x + {y}_{0}y}\right) \]\n\nshowing \( q \mid p \) . Therefore, \( p = \left( {m, n}\right) \) . \( \blacksquare \)
|
Yes
|
Example 1.9.5 Find the greatest common divisor of 165 and 385.
|
Use the Euclidean algorithm to write\n\n\[ \n{385} = 2\left( {165}\right) + {55} \]\n\nThus the next two numbers are 55 and 165 . Then\n\n\[ \n{165} = 3 \times {55} \]\n\nand so the greatest common divisor of the first two numbers is 55 .
|
Yes
|
Find the greatest common divisor of 1237 and 4322.
|
Use the Euclidean algorithm\n\n\[ \n{4322} = 3\left( {1237}\right) + {611} \n\]\n\nNow the two new numbers are 1237, 611. Then\n\n\[ \n{1237} = 2\left( {611}\right) + {15} \n\]\n\nThe two new numbers are 611, 15. Then\n\n\[ \n{611} = {40}\left( {15}\right) + {11} \n\]\n\nThe two new numbers are 15, 11. Then\n\n\[ \n{15} = 1\left( {11}\right) + 4 \n\]\n\nThe two new numbers are 11, 4. Then\n\n\[ \n{11} = 2\left( 4\right) + 3 \n\]\n\nThe two new numbers are 4, 3. Then\n\n\[ \n4 = 1\left( 3\right) + 1 \n\]\n\nThe two new numbers are 3, 1. Then\n\n\[ \n3 = 3 \times 1 \n\]\n\nand so 1 is the greatest common divisor.
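The procedure above is easy to mechanize; a minimal Python version of the Euclidean algorithm:

```python
# Repeatedly replace the pair (a, b) by (b, a mod b) until the
# remainder is 0; the last nonzero value is the gcd.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(165, 385))   # 55
print(gcd(1237, 4322)) # 1
```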
|
Yes
|
Theorem 1.9.7 If \( p \) is a prime and \( p \mid {ab} \) then either \( p \mid a \) or \( p \mid b \) .
|
Proof: Suppose \( p \) does not divide \( a \) . Then since \( p \) is prime, the only factors of \( p \) are 1 and \( p \), so it follows that \( \left( {p, a}\right) = 1 \) and therefore, there exist integers \( x \) and \( y \) such that\n\n\[ 1 = {ax} + {yp}. \]\n\nMultiplying this equation by \( b \) yields\n\n\[ b = {abx} + {ybp}. \]\n\nSince \( p \mid {ab} \), \( {ab} = {pz} \) for some integer \( z \) . Therefore,\n\n\[ b = {abx} + {ybp} = {pzx} + {ybp} = p\left( {{xz} + {yb}}\right) \]\n\nand this shows \( p \) divides \( b \) . \( \blacksquare \)
|
Yes
|
Theorem 1.9.8 (Fundamental theorem of arithmetic) Let \( a \in \mathbb{N} \smallsetminus \{ 1\} \) . Then \( a = \mathop{\prod }\limits_{{i = 1}}^{n}{p}_{i} \) where \( {p}_{i} \) are all prime numbers. Furthermore, this prime factorization is unique except for the order of the factors.
|
Proof: If \( a \) equals a prime number, the prime factorization clearly exists. In particular the prime factorization exists for the prime number 2. Assume this theorem is true for all \( a \leq n - 1 \) . If \( n \) is a prime, then it has a prime factorization. On the other hand, if \( n \) is not a prime, then there exist two integers \( k \) and \( m \) such that \( n = {km} \) where each of \( k \) and \( m \) are less than \( n \) . Therefore, each of these is no larger than \( n - 1 \) and consequently, each has a prime factorization. Thus so does \( n \) . It remains to argue the prime factorization is unique except for order of the factors.\n\nSuppose\n\n\[ \mathop{\prod }\limits_{{i = 1}}^{n}{p}_{i} = \mathop{\prod }\limits_{{j = 1}}^{m}{q}_{j} \]\n\nwhere the \( {p}_{i} \) and \( {q}_{j} \) are all prime, there is no way to reorder the \( {q}_{k} \) such that \( m = n \) and \( {p}_{i} = {q}_{i} \) for all \( i \), and \( n + m \) is the smallest positive integer such that this happens. Then by Theorem 1.9.7, \( {p}_{1} \mid {q}_{j} \) for some \( j \) . Since these are prime numbers this requires \( {p}_{1} = {q}_{j} \) . Reordering if necessary it can be assumed that \( {q}_{j} = {q}_{1} \) . Then dividing both sides by \( {p}_{1} = {q}_{1} \), \n\n\[ \mathop{\prod }\limits_{{i = 1}}^{{n - 1}}{p}_{i + 1} = \mathop{\prod }\limits_{{j = 1}}^{{m - 1}}{q}_{j + 1} \]\n\nSince \( n + m \) was as small as possible for the theorem to fail, it follows that \( n - 1 = m - 1 \) and the prime numbers, \( {q}_{2},\cdots ,{q}_{m} \) can be reordered in such a way that \( {p}_{k} = {q}_{k} \) for all \( k = 2,\cdots, n \) . Hence \( {p}_{i} = {q}_{i} \) for all \( i \) because it was already argued that \( {p}_{1} = {q}_{1} \), and this results in a contradiction.
|
Yes
|
Find the solutions to the system,\n\n\[ \nx + {3y} + {6z} = {25} \]\n\n\[ \n{2x} + {7y} + {14z} = {58} \]\n\n\[ \n{2y} + {5z} = {19} \]\n
|
To solve this system replace the second equation by \( \left( {-2}\right) \) times the first equation added to the second. This yields the system\n\n\[ \nx + {3y} + {6z} = {25} \]\n\n\[ \ny + {2z} = 8 \]\n\n\[ \n{2y} + {5z} = {19} \]\n\nNow take \( \left( {-2}\right) \) times the second and add to the third. More precisely, replace the third equation with \( \left( {-2}\right) \) times the second added to the third. This yields the system\n\n\[ \nx + {3y} + {6z} = {25} \]\n\n\[ \ny + {2z} = 8 \]\n\n\[ \nz = 3 \]\n\nAt this point, you can tell what the solution is. This system has the same solution as the original system and in the above, \( z = 3 \) . Then using this in the second equation, it follows \( y + 6 = 8 \) and so \( y = 2 \) . Now using this in the top equation yields \( x + 6 + {18} = {25} \) and so \( x = 1 \) .
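The back-substitution phase can be mirrored in a few lines of exact arithmetic (a sketch of the steps above, using Python's `fractions` module):

```python
from fractions import Fraction

# After elimination:  x + 3y + 6z = 25,  y + 2z = 8,  z = 3.
z = Fraction(3)
y = 8 - 2 * z           # from the second equation
x = 25 - 3 * y - 6 * z  # from the first equation

# Check the solution in the original three equations.
assert x + 3*y + 6*z == 25
assert 2*x + 7*y + 14*z == 58
assert 2*y + 5*z == 19
print(x, y, z)  # 1 2 3
```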
|
Yes
|
Example 1.10.3 Give the complete solution to the system of equations, \( {5x} + {10y} - {7z} = - 2 \) , \( {2x} + {4y} - {3z} = - 1 \), and \( {3x} + {6y} + {5z} = 9 \) .
|
The augmented matrix for this system is\n\n\[ \left( \begin{matrix} 2 & 4 & - 3 & - 1 \\ 5 & {10} & - 7 & - 2 \\ 3 & 6 & 5 & 9 \end{matrix}\right) \]\n\nMultiply the second row by 2, the first row by 5, and then take (-1) times the first row and add to the second. Then multiply the first row by \( 1/5 \) . This yields\n\n\[ \left( \begin{matrix} 2 & 4 & - 3 & - 1 \\ 0 & 0 & 1 & 1 \\ 3 & 6 & 5 & 9 \end{matrix}\right) \]\n\nNow, combining some row operations, take (-3) times the first row and add this to 2 times the last row and replace the last row with this. This yields\n\n\[ \left( \begin{matrix} 2 & 4 & - 3 & - 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & {19} & {21} \end{matrix}\right) \]\n\nPutting in the variables, the last two rows say \( z = 1 \) and \( {19z} = {21} \), i.e. \( z = {21}/{19} \) . This is impossible so the last system of equations determined by the above augmented matrix has no solution. However, it has the same solution set as the first system of equations. This shows there is no solution to the three given equations. When this happens, the system is called inconsistent.
|
Yes
|
Example 1.10.4 Give the complete solution to the system of equations, \( {3x} - y - {5z} = 9 \) , \( y - {10z} = 0 \), and \( - {2x} + y = - 6 \) .
|
The augmented matrix of this system is\n\n\[ \left( \begin{matrix} 3 & - 1 & - 5 & 9 \\ 0 & 1 & - {10} & 0 \\ - 2 & 1 & 0 & - 6 \end{matrix}\right) \]\n\nReplace the last row with 2 times the top row added to 3 times the bottom row. This gives\n\n\[ \left( \begin{matrix} 3 & - 1 & - 5 & 9 \\ 0 & 1 & - {10} & 0 \\ 0 & 1 & - {10} & 0 \end{matrix}\right) \]\n\nNext take -1 times the middle row and add to the bottom.\n\n\[ \left( \begin{matrix} 3 & - 1 & - 5 & 9 \\ 0 & 1 & - {10} & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\nTake the middle row and add to the top and then divide the resulting top row by 3 .\n\n\[ \left( \begin{matrix} 1 & 0 & - 5 & 3 \\ 0 & 1 & - {10} & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\nThis says \( y = {10z} \) and \( x = 3 + {5z} \) . Evidently \( z \) can equal any number. Therefore, the solution set of this system is \( x = 3 + {5t}, y = {10t} \), and \( z = t \) where \( t \) is completely arbitrary. The system has an infinite set of solutions and this is a good description of the solutions. This is what it is all about, finding the solutions to the system.
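A short sketch (not from the text) confirms that the whole one-parameter family solves the original system, by checking a few values of \( t \):

```python
# Verify that x = 3 + 5t, y = 10t, z = t solves
# 3x - y - 5z = 9, y - 10z = 0, -2x + y = -6 for several values of t.
for t in [-2, 0, 1, 7]:
    x, y, z = 3 + 5*t, 10*t, t
    assert 3*x - y - 5*z == 9
    assert y - 10*z == 0
    assert -2*x + y == -6
```

Checking several values of \( t \) illustrates, though of course does not prove, that every choice of the parameter gives a solution; the proof is the row reduction above.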
|
Yes
|
Example 1.10.7 Give the complete solution to the system of equations, \( - {41x} + {15y} = {168} \) , \( {109x} - {40y} = - {447}, - {3x} + y = {12} \), and \( {2x} + z = - 1 \) .
|
The augmented matrix is\n\n\[ \left( \begin{matrix} - {41} & {15} & 0 & {168} \\ {109} & - {40} & 0 & - {447} \\ - 3 & 1 & 0 & {12} \\ 2 & 0 & 1 & - 1 \end{matrix}\right) \]\n\nTo solve this multiply the top row by 109 , the second row by 41 , add the top row to the second row, and multiply the top row by \( 1/{109} \) . Note how this process combined several row operations. This yields\n\n\[ \left( \begin{matrix} - {41} & {15} & 0 & {168} \\ 0 & - 5 & 0 & - {15} \\ - 3 & 1 & 0 & {12} \\ 2 & 0 & 1 & - 1 \end{matrix}\right) \]\n\nNext take 2 times the third row and replace the fourth row by this added to 3 times the fourth row. Then take \( \left( {-{41}}\right) \) times the third row and replace the first row by this added to 3 times the first row. Then switch the third and the first rows and multiply the new top row by \( \left( {-{41}}\right) \) . This yields\n\n\[ \left( \begin{matrix} {123} & - {41} & 0 & - {492} \\ 0 & - 5 & 0 & - {15} \\ 0 & 4 & 0 & {12} \\ 0 & 2 & 3 & {21} \end{matrix}\right) \]\n\nTake \( - 1/2 \) times the third row and add to the bottom row. Then take 5 times the third row and add to four times the second. Finally take 41 times the third row and add to 4 times the top row. This yields\n\n\[ \left( \begin{matrix} {492} & 0 & 0 & - {1476} \\ 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & {12} \\ 0 & 0 & 3 & {15} \end{matrix}\right) \]\n\nIt follows \( x = \frac{-{1476}}{492} = - 3, y = 3 \) and \( z = 5 \) .
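Again, as a sketch of a check (not part of the original solution), the values found can be substituted into all four of the given equations:

```python
# Confirm x = -3, y = 3, z = 5 satisfies all four equations.
x, y, z = -3, 3, 5

assert -41*x + 15*y == 168    # 123 + 45
assert 109*x - 40*y == -447   # -327 - 120
assert -3*x + y == 12         # 9 + 3
assert 2*x + z == -1          # -6 + 5
```

Note that four equations in three unknowns are usually inconsistent; the row of zeros in the final matrix is what signals that this particular system is consistent after all.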
|
Yes
|
Find \( \left( {1,2,0, - 1}\right) \cdot \left( {0, i,2,3}\right) \) .
|
Recall that in this inner product the entries of the second vector are conjugated. This equals \( 1 \cdot 0 + 2\left( {-i}\right) + 0 \cdot 2 + \left( {-1}\right) \cdot 3 = - 3 - {2i} \)
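A minimal sketch of the same computation in Python, using the convention that the second factor is conjugated:

```python
# Complex dot product: a . b = sum of a_k * conj(b_k).
a = [1, 2, 0, -1]
b = [0, 1j, 2, 3]
dot = sum(x * complex(y).conjugate() for x, y in zip(a, b))
assert dot == -3 - 2j
```

The `complex(y)` cast is only there so that the integer entries of `b` also have a `conjugate()` method.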
|
Yes
|
Theorem 1.15.4 The inner product satisfies the inequality\n\n\[ \left| {\mathbf{a} \cdot \mathbf{b}}\right| \leq \left| \mathbf{a}\right| \left| \mathbf{b}\right| \]\n\nFurthermore equality is obtained if and only if one of \( \mathbf{a} \) or \( \mathbf{b} \) is a scalar multiple of the other.
|
Proof: First define \( \theta \in \mathbb{C} \) such that\n\n\[ \bar{\theta }\left( {\mathbf{a} \cdot \mathbf{b}}\right) = \left| {\mathbf{a} \cdot \mathbf{b}}\right| ,\left| \theta \right| = 1, \]\n\nand define a function of \( t \in \mathbb{R} \)\n\n\[ f\left( t\right) = \left( {\mathbf{a} + {t\theta }\mathbf{b}}\right) \cdot \left( {\mathbf{a} + {t\theta }\mathbf{b}}\right) . \]\n\nThen by (1.20), \( f\left( t\right) \geq 0 \) for all \( t \in \mathbb{R} \) . Also from (1.21),(1.22),(1.19), and (1.23)\n\n\[ f\left( t\right) = \mathbf{a} \cdot \left( {\mathbf{a} + {t\theta }\mathbf{b}}\right) + {t\theta }\mathbf{b} \cdot \left( {\mathbf{a} + {t\theta }\mathbf{b}}\right) \]\n\n\[ = \mathbf{a} \cdot \mathbf{a} + t\bar{\theta }\left( {\mathbf{a} \cdot \mathbf{b}}\right) + {t\theta }\left( {\mathbf{b} \cdot \mathbf{a}}\right) + {t}^{2}{\left| \theta \right| }^{2}\mathbf{b} \cdot \mathbf{b} \]\n\n\[ = {\left| \mathbf{a}\right| }^{2} + {2t}\operatorname{Re}\bar{\theta }\left( {\mathbf{a} \cdot \mathbf{b}}\right) + {\left| \mathbf{b}\right| }^{2}{t}^{2} = {\left| \mathbf{a}\right| }^{2} + {2t}\left| {\mathbf{a} \cdot \mathbf{b}}\right| + {\left| \mathbf{b}\right| }^{2}{t}^{2} \]\n\nNow if \( {\left| \mathbf{b}\right| }^{2} = 0 \) it must be the case that \( \mathbf{a} \cdot \mathbf{b} = 0 \) because otherwise, you could pick large negative values of \( t \) and violate \( f\left( t\right) \geq 0 \) . Therefore, in this case, the Cauchy Schwarz inequality holds. 
In the case that \( \left| \mathbf{b}\right| \neq 0, y = f\left( t\right) \) is a polynomial which opens up and therefore, if it is always nonnegative, its graph touches or lies above the \( t \) axis. Then the quadratic formula requires that\n\n\[ \overset{\text{The discriminant }}{\overbrace{4{\left| \mathbf{a} \cdot \mathbf{b}\right| }^{2} - 4{\left| \mathbf{a}\right| }^{2}{\left| \mathbf{b}\right| }^{2}}} \leq 0 \]\n\nsince otherwise the function \( f\left( t\right) \) would have two real zeros and would necessarily have a graph which dips below the \( t \) axis. This proves (1.24).\n\nIt is clear from the axioms of the inner product that equality holds in (1.24) whenever one of the vectors is a scalar multiple of the other. It only remains to verify this is the only way equality can occur. If either vector equals zero, then equality is obtained in (1.24) so it can be assumed both vectors are nonzero. Then if equality is achieved, it follows \( f\left( t\right) \) has exactly one real zero because the discriminant vanishes. Therefore, for some value of \( t,\mathbf{a} + {t\theta }\mathbf{b} = \mathbf{0} \) showing that \( \mathbf{a} \) is a multiple of \( \mathbf{b} \) .
|
Yes
|
Theorem 1.15.5 (Triangle inequality) For \( \mathbf{a},\mathbf{b} \in {\mathbb{F}}^{n} \)\n\n\[ \left| {\mathbf{a} + \mathbf{b}}\right| \leq \left| \mathbf{a}\right| + \left| \mathbf{b}\right| \]\n\nand equality holds if and only if one of the vectors is a nonnegative scalar multiple of the other.
|
Proof: By properties of the inner product and the Cauchy Schwarz inequality,\n\n\[ {\left| \mathbf{a} + \mathbf{b}\right| }^{2} = \left( {\mathbf{a} + \mathbf{b}}\right) \cdot \left( {\mathbf{a} + \mathbf{b}}\right) = \left( {\mathbf{a} \cdot \mathbf{a}}\right) + \left( {\mathbf{a} \cdot \mathbf{b}}\right) + \left( {\mathbf{b} \cdot \mathbf{a}}\right) + \left( {\mathbf{b} \cdot \mathbf{b}}\right) \]\n\n\[ = {\left| \mathbf{a}\right| }^{2} + 2\operatorname{Re}\left( {\mathbf{a} \cdot \mathbf{b}}\right) + {\left| \mathbf{b}\right| }^{2} \leq {\left| \mathbf{a}\right| }^{2} + 2\left| {\mathbf{a} \cdot \mathbf{b}}\right| + {\left| \mathbf{b}\right| }^{2} \]\n\n\[ \leq {\left| \mathbf{a}\right| }^{2} + 2\left| \mathbf{a}\right| \left| \mathbf{b}\right| + {\left| \mathbf{b}\right| }^{2} = {\left( \left| \mathbf{a}\right| + \left| \mathbf{b}\right| \right) }^{2}. \]\n\nTaking square roots of both sides you obtain (1.25).\n\nIt remains to consider when equality occurs. If either vector equals zero, then that vector equals zero times the other vector and the claim about when equality occurs is verified. Therefore, it can be assumed both vectors are nonzero. To get equality in the second inequality above, Theorem 1.15.4 implies one of the vectors must be a multiple of the other. Say \( \mathbf{b} = \alpha \mathbf{a} \). Also, to get equality in the first inequality, \( \left( {\mathbf{a} \cdot \mathbf{b}}\right) \) must be a nonnegative real number. Thus\n\n\[ 0 \leq \left( {\mathbf{a} \cdot \mathbf{b}}\right) = \left( {\mathbf{a} \cdot \alpha \mathbf{a}}\right) = \bar{\alpha }{\left| \mathbf{a}\right| }^{2}. \]\n\nTherefore, \( \alpha \) must be a real number which is nonnegative.
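A numeric sketch (illustrative only; the vectors chosen are arbitrary) of both inequalities for real vectors, including the equality case of the triangle inequality:

```python
# Check |a . b| <= |a||b| (Cauchy Schwarz) and |a + b| <= |a| + |b|
# (triangle inequality) for a pair of real vectors.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

a = [1.0, -2.0, 3.0]
b = [4.0, 0.0, -1.0]

assert abs(dot(a, b)) <= norm(a) * norm(b)
assert norm([x + y for x, y in zip(a, b)]) <= norm(a) + norm(b)

# Equality in the triangle inequality when b is a nonnegative multiple of a:
c = [2 * x for x in a]
assert math.isclose(norm([x + y for x, y in zip(a, c)]), norm(a) + norm(c))
```

The last assertion illustrates the equality condition of the theorem: with \( \mathbf{c} = 2\mathbf{a} \), both sides equal \( 3\left| \mathbf{a}\right| \).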
|
Yes
|
Compute\n\n\[ \left( \begin{matrix} 1 & 2 & 1 & 3 \\ 0 & 2 & 1 & - 2 \\ 2 & 1 & 4 & 1 \end{matrix}\right) \left( \begin{array}{l} 1 \\ 2 \\ 0 \\ 1 \end{array}\right) \]
|
First of all, this is of the form \( \left( {3 \times 4}\right) \left( {4 \times 1}\right) \) and so the result should be a \( \left( {3 \times 1}\right) \) . Note how the inside numbers cancel. To get the entry in the second row and first and only column, compute\n\n\[ \mathop{\sum }\limits_{{k = 1}}^{4}{a}_{2k}{v}_{k} = {a}_{21}{v}_{1} + {a}_{22}{v}_{2} + {a}_{23}{v}_{3} + {a}_{24}{v}_{4} \]\n\n\[ = 0 \times 1 + 2 \times 2 + 1 \times 0 + \left( {-2}\right) \times 1 = 2\text{.} \]
|
No
|
Example 2.1.6 Multiply the following.\n\n\\[ \n\\left( \\begin{array}{lll} 1 & 2 & 1 \\\\ 0 & 2 & 1 \\end{array}\\right) \\left( \\begin{matrix} 1 & 2 & 0 \\\\ 0 & 3 & 1 \\\\ - 2 & 1 & 1 \\end{matrix}\\right) \n\\]
|
The first thing you need to check before doing anything else is whether it is possible to do the multiplication. The first matrix is a \( 2 \times 3 \) and the second matrix is a \( 3 \times 3 \) . Therefore, it is possible to multiply these matrices. According to the above discussion it should be a \( 2 \times 3 \) matrix of the form\n\n\\[ \n\\left( {\\overset{\\text{First column }}{\\overbrace{\\left( \\begin{array}{lll} 1 & 2 & 1 \\\\ 0 & 2 & 1 \\end{array}\\right) \\left( \\begin{matrix} 1 \\\\ 0 \\\\ - 2 \\end{matrix}\\right) }},\\overset{\\text{Second column }}{\\overbrace{\\left( \\begin{array}{lll} 1 & 2 & 1 \\\\ 0 & 2 & 1 \\end{array}\\right) \\left( \\begin{array}{l} 2 \\\\ 3 \\\\ 1 \\end{array}\\right) }},\\overset{\\text{Third column }}{\\overbrace{\\left( \\begin{array}{lll} 1 & 2 & 1 \\\\ 0 & 2 & 1 \\end{array}\\right) \\left( \\begin{array}{l} 0 \\\\ 1 \\\\ 1 \\end{array}\\right) }}}\\right) \n\\]\n\nYou know how to multiply a matrix times a vector and so you do so to obtain each of the three columns. Thus\n\n\\[ \n\\left( \\begin{array}{lll} 1 & 2 & 1 \\\\ 0 & 2 & 1 \\end{array}\\right) \\left( \\begin{matrix} 1 & 2 & 0 \\\\ 0 & 3 & 1 \\\\ - 2 & 1 & 1 \\end{matrix}\\right) = \\left( \\begin{array}{lll} - 1 & 9 & 3 \\\\ - 2 & 7 & 3 \\end{array}\\right) .\n\\]
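The column-by-column view of the product can be sketched in a few lines of Python (an illustration, not part of the text): the \( j^{th} \) column of \( AB \) is \( A \) times the \( j^{th} \) column of \( B \).

```python
# Matrix times vector: one dot product per row of A.
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 1],
     [0, 2, 1]]
B = [[1, 2, 0],
     [0, 3, 1],
     [-2, 1, 1]]

# Column j of AB is A applied to column j of B.
cols = [mat_vec(A, [row[j] for row in B]) for j in range(3)]
# Reassemble the columns into the 2 x 3 product.
AB = [[cols[j][i] for j in range(3)] for i in range(2)]
assert AB == [[-1, 9, 3], [-2, 7, 3]]
```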
|
Yes
|
Example 2.1.7 Multiply the following.\n\n\\[ \n\\left( \\begin{matrix} 1 & 2 & 0 \\\\ 0 & 3 & 1 \\\\ - 2 & 1 & 1 \\end{matrix}\\right) \\left( \\begin{array}{lll} 1 & 2 & 1 \\\\ 0 & 2 & 1 \\end{array}\\right) \n\\]
|
First check if it is possible. This is of the form \\( \\left( {3 \\times 3}\\right) \\left( {2 \\times 3}\\right) \\) . The inside numbers do not match and so you can't do this multiplication. This means that anything you write will be absolute nonsense because it is impossible to multiply these matrices in this order. Aren't they the same two matrices considered in the previous example? Yes they are. It is just that here they are in a different order. This shows something you must always remember about matrix multiplication.\n\n\\[ \n\\text{Order Matters!} \n\\]\n\nMatrix multiplication is not commutative. This is very different than multiplication of numbers!
|
Yes
|
Example 2.1.9 Multiply if possible \( \left( \begin{array}{ll} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{array}\right) \left( \begin{array}{lll} 2 & 3 & 1 \\ 7 & 6 & 2 \end{array}\right) \) .
|
First check to see if this is possible. It is of the form \( \left( {3 \times 2}\right) \left( {2 \times 3}\right) \) and since the inside numbers match, it must be possible to do this and the result should be a \( 3 \times 3 \) matrix. The answer is of the form\n\n\[ \left( {\left( \begin{array}{ll} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{array}\right) \left( \begin{array}{l} 2 \\ 7 \end{array}\right) ,\left( \begin{array}{ll} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{array}\right) \left( \begin{array}{l} 3 \\ 6 \end{array}\right) ,\left( \begin{array}{ll} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{array}\right) \left( \begin{array}{l} 1 \\ 2 \end{array}\right) }\right) \]\n\nwhere the commas separate the columns in the resulting product. Thus the above product equals\n\n\[ \left( \begin{matrix} {16} & {15} & 5 \\ {13} & {15} & 5 \\ {46} & {42} & {14} \end{matrix}\right) \]\n\na \( 3 \times 3 \) matrix as desired. In terms of the \( i{j}^{\text{th }} \) entries and the above definition, the entry in the third row and second column of the product should equal\n\n\[ \mathop{\sum }\limits_{k}{a}_{3k}{b}_{k2} = {a}_{31}{b}_{12} + {a}_{32}{b}_{22} = 2 \times 3 + 6 \times 6 = {42}. \]\n\nYou should try a few more such examples to verify the above definition in terms of the \( i{j}^{th} \) entries works for other entries.
|
Yes
|
Example 2.1.10 Multiply if possible \( \left( \begin{array}{ll} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{array}\right) \left( \begin{array}{lll} 2 & 3 & 1 \\ 7 & 6 & 2 \\ 0 & 0 & 0 \end{array}\right) \) .
|
This is not possible because it is of the form \( \left( {3 \times 2}\right) \left( {3 \times 3}\right) \) and the middle numbers don't match.
|
Yes
|
Example 2.1.11 Multiply if possible \( \left( \begin{array}{lll} 2 & 3 & 1 \\ 7 & 6 & 2 \\ 0 & 0 & 0 \end{array}\right) \left( \begin{array}{ll} 1 & 2 \\ 3 & 1 \\ 2 & 6 \end{array}\right) \) .
|
This is possible because in this case it is of the form \( \left( {3 \times 3}\right) \left( {3 \times 2}\right) \) and the middle numbers do match. When the multiplication is done it equals\n\n\[ \left( \begin{matrix} {13} & {13} \\ {29} & {32} \\ 0 & 0 \end{matrix}\right) \]
|
Yes
|
Example 2.1.12 Multiply if possible \( \left( \begin{array}{l} 1 \\ 2 \\ 1 \end{array}\right) \left( \begin{array}{llll} 1 & 2 & 1 & 0 \end{array}\right) \) .
|
In this case you are trying to do \( \left( {3 \times 1}\right) \left( {1 \times 4}\right) \) . The inside numbers match so you can do it. Verify\n\n\[ \left( \begin{array}{l} 1 \\ 2 \\ 1 \end{array}\right) \left( \begin{array}{llll} 1 & 2 & 1 & 0 \end{array}\right) = \left( \begin{array}{llll} 1 & 2 & 1 & 0 \\ 2 & 4 & 2 & 0 \\ 1 & 2 & 1 & 0 \end{array}\right) \]
|
Yes
|
Write the matrix which is associated with this directed graph and find the number of ways to go from 2 to 4 in three steps.
|
Here you need to use a \( 4 \times 4 \) matrix. The one you need is\n\n\[ \left( \begin{array}{llll} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 \end{array}\right) \]\n\nThen to find the answer, you just need to multiply this matrix by itself three times and look at the entry in the second row and fourth column.\n\n\[ {\left( \begin{array}{llll} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 \end{array}\right) }^{3} = \left( \begin{array}{llll} 1 & 3 & 2 & 1 \\ 2 & 1 & 0 & 1 \\ 3 & 3 & 1 & 2 \\ 1 & 2 & 1 & 1 \end{array}\right) \]\n\nThere is exactly one way to go from 2 to 4 in three steps.
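The path-counting computation above can be sketched directly (illustrative code, not from the text): the \( \left( {i, j}\right) \) entry of \( {A}^{3} \) counts the walks of length 3 from vertex \( i \) to vertex \( j \).

```python
# Square matrix product, entry (i, j) = sum over k of X[i][k] * Y[k][j].
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of the directed graph.
A = [[0, 1, 1, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 1],
     [0, 1, 0, 1]]

A3 = mat_mul(mat_mul(A, A), A)
# Three-step walks from vertex 2 to vertex 4 (1-based vertex labels):
assert A3[1][3] == 1
assert A3[0] == [1, 3, 2, 1]   # first row of A^3, as computed above
```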
|
Yes
|
Compare \( \left( \begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right) \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \) and \( \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \left( \begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right) \) .
|
The first product is\n\n\[ \left( \begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right) \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) = \left( \begin{array}{ll} 2 & 1 \\ 4 & 3 \end{array}\right) \]\n\nthe second product is\n\n\[ \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \left( \begin{array}{ll} 1 & 2 \\ 3 & 4 \end{array}\right) = \left( \begin{array}{ll} 3 & 4 \\ 1 & 2 \end{array}\right) \]\n\nand you see these are not equal. Therefore, you cannot conclude that \( {AB} = {BA} \) for matrix multiplication.
|
Yes
|
Proposition 2.1.15 If all multiplications and additions make sense, the following hold for matrices, \( A, B, C \) and \( a, b \) scalars.\n\n\[ A\left( {{aB} + {bC}}\right) = a\left( {AB}\right) + b\left( {AC}\right) \]
|
Proof: Using the above definition of matrix multiplication,\n\n\[ {\left( A\left( aB + bC\right) \right) }_{ij} = \mathop{\sum }\limits_{k}{A}_{ik}{\left( aB + bC\right) }_{kj} \]\n\n\[ = \mathop{\sum }\limits_{k}{A}_{ik}\left( {a{B}_{kj} + b{C}_{kj}}\right) \]\n\n\[ = a\mathop{\sum }\limits_{k}{A}_{ik}{B}_{kj} + b\mathop{\sum }\limits_{k}{A}_{ik}{C}_{kj} \]\n\n\[ = a{\left( AB\right) }_{ij} + b{\left( AC\right) }_{ij} \]\n\n\[ = {\left( a\left( AB\right) + b\left( AC\right) \right) }_{ij} \]\n\nshowing that \( A\left( {{aB} + {bC}}\right) = a\left( {AB}\right) + b\left( {AC}\right) \) as claimed.
|
Yes
|
Lemma 2.1.17 Let \( A \) be an \( m \times n \) matrix and let \( B \) be a \( n \times p \) matrix. Then\n\n\[ \n{\left( AB\right) }^{T} = {B}^{T}{A}^{T} \n\]
|
Proof: From the definition,\n\n\[ \n{\left( {\left( AB\right) }^{T}\right) }_{ij} = {\left( AB\right) }_{ji} \n\]\n\n\[ \n= \mathop{\sum }\limits_{k}{A}_{jk}{B}_{ki} \n\]\n\n\[ \n= \mathop{\sum }\limits_{k}{\left( {B}^{T}\right) }_{ik}{\left( {A}^{T}\right) }_{kj} \n\]\n\n\[ \n= {\left( {B}^{T}{A}^{T}\right) }_{ij} \n\]
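A numeric sketch of the identity for one pair of matrices (the particular matrices are arbitrary choices, not from the text):

```python
# Check (AB)^T == B^T A^T for a 2x3 matrix A and a 3x2 matrix B.
def transpose(M):
    return [list(col) for col in zip(*M)]

def mat_mul(X, Y):
    # zip(*Y) iterates over the columns of Y.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 2, 1],
     [0, 2, 1]]
B = [[1, 2],
     [0, 3],
     [-2, 1]]

assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```

Note the reversal of the factors on the right side, exactly as in the lemma.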
|
Yes
|
Lemma 2.1.21 Suppose \( A \) is an \( m \times n \) matrix and \( {I}_{n} \) is the \( n \times n \) identity matrix. Then \( A{I}_{n} = A \) . If \( {I}_{m} \) is the \( m \times m \) identity matrix, it also follows that \( {I}_{m}A = A \) .
|
\[ \n{\left( A{I}_{n}\right) }_{ij} = \mathop{\sum }\limits_{k}{A}_{ik}{\delta }_{kj} \n\] \n\[ \n= {A}_{ij} \n\] \nand so \( A{I}_{n} = A \) . The other case is left as an exercise for you.
|
No
|
Proposition 2.1.23 Suppose \( {AB} = {BA} = I \) . Then \( B = {A}^{-1} \) .
|
Proof: From the definition \( B \) is an inverse for \( A \) . Could there be another one \( {B}^{\prime } \) ?\n\n\[ \n{B}^{\prime } = {B}^{\prime }I = {B}^{\prime }\left( {AB}\right) = \left( {{B}^{\prime }A}\right) B = {IB} = B.\n\]\n\nThus, the inverse, if it exists, is unique.
|
Yes
|
Example 2.1.24 Let \( A = \left( \begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right) \) . Does \( A \) have an inverse?
|
One might think \( A \) would have an inverse because it does not equal zero. However, \[ \left( \begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right) \left( \begin{matrix} - 1 \\ 1 \end{matrix}\right) = \left( \begin{array}{l} 0 \\ 0 \end{array}\right) \] and if \( {A}^{-1} \) existed, this could not happen because you could multiply on the left by \( {A}^{-1} \) and conclude the vector \( {\left( -1,1\right) }^{T} \) equals \( {\left( 0,0\right) }^{T} \), which is false. Thus the answer is that \( A \) does not have an inverse.
|
Yes
|
Example 2.1.27 Let \( A = \left( \begin{matrix} 1 & 2 & 2 \\ 1 & 0 & 2 \\ 3 & 1 & - 1 \end{matrix}\right) \) . Find \( {A}^{-1} \) .
|
Set up the augmented matrix \( \left( {A \mid I}\right) \)\n\n\[ \left( \begin{matrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 1 & 0 & 2 & 0 & 1 & 0 \\ 3 & 1 & - 1 & 0 & 0 & 1 \end{matrix}\right) \]\n\nNext take \( \left( {-1}\right) \) times the first row and add to the second followed by \( \left( {-3}\right) \) times the first row added to the last. This yields\n\n\[ \left( \begin{matrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & - 2 & 0 & - 1 & 1 & 0 \\ 0 & - 5 & - 7 & - 3 & 0 & 1 \end{matrix}\right) \]\n\nThen take 5 times the second row and add to -2 times the last row.\n\n\[ \left( \begin{matrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & - {10} & 0 & - 5 & 5 & 0 \\ 0 & 0 & {14} & 1 & 5 & - 2 \end{matrix}\right) \]\n\nNext take the last row and add to (-7) times the top row. This yields\n\n\[ \left( \begin{matrix} - 7 & - {14} & 0 & - 6 & 5 & - 2 \\ 0 & - {10} & 0 & - 5 & 5 & 0 \\ 0 & 0 & {14} & 1 & 5 & - 2 \end{matrix}\right) \]\n\nNow take \( \left( {-7/5}\right) \) times the second row and add to the top.\n\n\[ \left( \begin{matrix} - 7 & 0 & 0 & 1 & - 2 & - 2 \\ 0 & - {10} & 0 & - 5 & 5 & 0 \\ 0 & 0 & {14} & 1 & 5 & - 2 \end{matrix}\right) \]\n\nFinally divide the top row by -7 , the second row by -10 and the bottom row by 14 which\n\nyields\n\n\[ \left( \begin{matrix} 1 & 0 & 0 & - \frac{1}{7} & \frac{2}{7} & \frac{2}{7} \\ 0 & 1 & 0 & \frac{1}{2} & - \frac{1}{2} & 0 \\ 0 & 0 & 1 & \frac{1}{14} & \frac{5}{14} & - \frac{1}{7} \end{matrix}\right) \]\n\nTherefore, the inverse is\n\n\[ \left( \begin{matrix} - \frac{1}{7} & \frac{2}{7} & \frac{2}{7} \\ \frac{1}{2} & - \frac{1}{2} & 0 \\ \frac{1}{14} & \frac{5}{14} & - \frac{1}{7} \end{matrix}\right) \]
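The procedure used above can be sketched as code (a simplified Gauss-Jordan routine over exact fractions; it assumes the input is square and invertible, and is an illustration rather than a robust implementation):

```python
# Gauss-Jordan on the augmented matrix (A | I) using exact arithmetic.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # Build (A | I) with Fraction entries.
    M = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot is 1.
        M[col] = [x / M[col][col] for x in M[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right half is now A^{-1}.
    return [row[n:] for row in M]

A = [[1, 2, 2],
     [1, 0, 2],
     [3, 1, -1]]
inv = inverse(A)
assert inv[0] == [Fraction(-1, 7), Fraction(2, 7), Fraction(2, 7)]
assert inv[1] == [Fraction(1, 2), Fraction(-1, 2), Fraction(0)]
assert inv[2] == [Fraction(1, 14), Fraction(5, 14), Fraction(-1, 7)]
```

Exact fractions reproduce the entries \( -1/7,2/7,\ldots \) found by hand, which floating point arithmetic would only approximate.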
|
Yes
|
Example 2.1.28 Let \( A = \left( \begin{array}{lll} 1 & 2 & 2 \\ 1 & 0 & 2 \\ 2 & 2 & 4 \end{array}\right) \) . Find \( {A}^{-1} \) .
|
Write the augmented matrix \( \left( {A \mid I}\right) \)\n\n\[ \left( \begin{array}{llllll} 1 & 2 & 2 & 1 & 0 & 0 \\ 1 & 0 & 2 & 0 & 1 & 0 \\ 2 & 2 & 4 & 0 & 0 & 1 \end{array}\right) \]\n\nand proceed to do row operations attempting to obtain \( \left( {I \mid {A}^{-1}}\right) \) . Take \( \left( {-1}\right) \) times the top row and add to the second. Then take \( \left( {-2}\right) \) times the top row and add to the bottom.\n\n\[ \left( \begin{matrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & - 2 & 0 & - 1 & 1 & 0 \\ 0 & - 2 & 0 & - 2 & 0 & 1 \end{matrix}\right) \]\n\nNext add \( \left( {-1}\right) \) times the second row to the bottom row.\n\n\[ \left( \begin{matrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & - 2 & 0 & - 1 & 1 & 0 \\ 0 & 0 & 0 & - 1 & - 1 & 1 \end{matrix}\right) \]\n\nAt this point, you can see there will be no inverse because you have obtained a row of zeros in the left half of the augmented matrix \( \left( {A \mid I}\right) \) . Thus there will be no way to obtain \( I \) on the left. In other words, the three systems of equations you must solve to find the inverse have no solution. In particular, there is no solution for the first column of \( {A}^{-1} \) which must\n\nsolve\n\[ A\left( \begin{array}{l} x \\ y \\ z \end{array}\right) = \left( \begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right) \]\n\nbecause a sequence of row operations leads to the impossible equation, \( {0x} + {0y} + {0z} = - 1 \) .
|
Yes
|
Lemma 2.3.3 Let \( \\mathbf{v} \\in {\\mathbb{F}}^{n} \) . Thus \( \\mathbf{v} \) is a list of numbers arranged vertically, \( {v}_{1},\\cdots ,{v}_{n} \) . Then\n\n\[ \n{\\mathbf{e}}_{i}^{T}\\mathbf{v} = {v}_{i} \n\]\n\n\( \\left( {2.20}\\right) \)\n\nAlso, if \( A \) is an \( m \\times n \) matrix, then letting \( {\\mathbf{e}}_{i} \\in {\\mathbb{F}}^{m} \) and \( {\\mathbf{e}}_{j} \\in {\\mathbb{F}}^{n} \) ,\n\n\[ \n{\\mathbf{e}}_{i}^{T}A{\\mathbf{e}}_{j} = {A}_{ij} \n\]\n\n\( \\left( {2.21}\\right) \)
|
Proof: First note that \( {\\mathbf{e}}_{i}^{T} \) is a \( 1 \\times n \) matrix and \( \\mathbf{v} \) is an \( n \\times 1 \) matrix so the above multiplication in (2.20) makes perfect sense. It equals\n\n\[ \n\\left( {0,\\cdots ,1,\\cdots 0}\\right) \\left( \\begin{matrix} {v}_{1} \\\\ \\vdots \\\\ {v}_{i} \\\\ \\vdots \\\\ {v}_{n} \\end{matrix}\\right) = {v}_{i} \n\]\n\nas claimed.\n\nConsider (2.21). From the definition of matrix multiplication, and noting that \( {\\left( {\\mathbf{e}}_{j}\\right) }_{k} = \) \( {\\delta }_{kj} \)\n\n\[ \n{\\mathbf{e}}_{i}^{T}A{\\mathbf{e}}_{j} = {\\mathbf{e}}_{i}^{T}\\left( \\begin{matrix} \\mathop{\\sum }\\limits_{k}{A}_{1k}{\\left( {\\mathbf{e}}_{j}\\right) }_{k} \\\\ \\vdots \\\\ \\mathop{\\sum }\\limits_{k}{A}_{ik}{\\left( {\\mathbf{e}}_{j}\\right) }_{k} \\\\ \\vdots \\\\ \\mathop{\\sum }\\limits_{k}{A}_{mk}{\\left( {\\mathbf{e}}_{j}\\right) }_{k} \\end{matrix}\\right) = {\\mathbf{e}}_{i}^{T}\\left( \\begin{matrix} {A}_{1j} \\\\ \\vdots \\\\ {A}_{ij} \\\\ \\vdots \\\\ {A}_{mj} \\end{matrix}\\right) = {A}_{ij} \n\]\n\nby the first part of the lemma.
|
Yes
|
Theorem 2.3.4 Let \( L : {\mathbb{F}}^{n} \rightarrow {\mathbb{F}}^{m} \) be a linear transformation. Then there exists a unique \( m \times n \) matrix \( A \) such that\n\n\[ A\mathbf{x} = L\mathbf{x} \]\n\nfor all \( \mathbf{x} \in {\mathbb{F}}^{n} \) . The \( i{k}^{\text{th }} \) entry of this matrix is given by\n\n\[ {\mathbf{e}}_{i}^{T}L{\mathbf{e}}_{k} \]\n\n\( \left( {2.22}\right) \)\n\nStated in another way, the \( {k}^{\text{th }} \) column of \( A \) equals \( L{\mathbf{e}}_{\mathbf{k}} \) .
|
Proof: By the lemma,\n\n\[ {\left( L\mathbf{x}\right) }_{i} = {\mathbf{e}}_{i}^{T}L\mathbf{x} = {\mathbf{e}}_{i}^{T}\mathop{\sum }\limits_{k}{x}_{k}L{\mathbf{e}}_{k} = \mathop{\sum }\limits_{k}\left( {{\mathbf{e}}_{i}^{T}L{\mathbf{e}}_{k}}\right) {x}_{k}. \]\n\nLet \( {A}_{ik} = {\mathbf{e}}_{i}^{T}L{\mathbf{e}}_{k} \), to prove the existence part of the theorem.\n\nTo verify uniqueness, suppose \( B\mathbf{x} = A\mathbf{x} = L\mathbf{x} \) for all \( \mathbf{x} \in {\mathbb{F}}^{n} \) . Then in particular, this is true for \( \mathbf{x} = {\mathbf{e}}_{j} \) and then multiply on the left by \( {\mathbf{e}}_{i}^{T} \) to obtain\n\n\[ {B}_{ij} = {\mathbf{e}}_{i}^{T}B{\mathbf{e}}_{j} = {\mathbf{e}}_{i}^{T}A{\mathbf{e}}_{j} = {A}_{ij} \]\n\nshowing \( A = B \) .
|
Yes
|
Corollary 2.3.5 A linear transformation, \( L : {\mathbb{F}}^{n} \rightarrow {\mathbb{F}}^{m} \) is completely determined by the vectors \( \left\{ {L{\mathbf{e}}_{1},\cdots, L{\mathbf{e}}_{n}}\right\} \) .
|
Proof: This follows immediately from the above theorem. The unique matrix determining the linear transformation which is given in (2.22) depends only on these vectors.
|
No
|
Find the linear transformation, \( L : {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{2} \) which has the property that \( L{\mathbf{e}}_{1} = \left( \begin{array}{l} 2 \\ 1 \end{array}\right) \) and \( L{\mathbf{e}}_{2} = \left( \begin{array}{l} 1 \\ 3 \end{array}\right) \).
|
From the above theorem and corollary, this linear transformation is that determined by matrix multiplication by the matrix \[ \left( \begin{array}{ll} 2 & 1 \\ 1 & 3 \end{array}\right) \]
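A sketch (not from the text) of the construction in the theorem: the columns of the matrix are the images \( L{\mathbf{e}}_{1} \) and \( L{\mathbf{e}}_{2} \), and applying \( L \) is then matrix-vector multiplication.

```python
# Columns of the matrix are the images of the standard basis vectors.
Le1 = [2, 1]
Le2 = [1, 3]
A = [[Le1[0], Le2[0]],
     [Le1[1], Le2[1]]]

def L(x):
    # Matrix-vector product: one dot product per row of A.
    return [sum(a * v for a, v in zip(row, x)) for row in A]

assert L([1, 0]) == Le1
assert L([0, 1]) == Le2
assert L([2, -1]) == [3, -1]   # linearity: 2*L(e1) - L(e2)
```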
|
Yes
|
Theorem 2.3.8 Let \( A \) be an \( m \times n \) matrix where \( m < n \) . Then \( N\left( A\right) \) contains nonzero vectors.
|
Proof: First consider the case where \( A \) is a \( 1 \times n \) matrix for \( n > 1 \) . Say\n\n\[ A = \left( \begin{array}{lll} {a}_{1} & \cdots & {a}_{n} \end{array}\right) \]\n\nIf \( {a}_{1} = 0 \), consider the vector \( \mathbf{x} = {\mathbf{e}}_{1} \) . If \( {a}_{1} \neq 0 \), let\n\n\[ \mathbf{x} = \left( \begin{matrix} b \\ 1 \\ \vdots \\ 1 \end{matrix}\right) \]\n\nwhere \( b \) is chosen to satisfy the equation\n\n\[ {a}_{1}b + \mathop{\sum }\limits_{{k = 2}}^{n}{a}_{k} = 0 \]\n\nSuppose now that the theorem is true for any \( m \times n \) matrix with \( n > m \) and consider an \( \left( {m + 1}\right) \times n \) matrix \( A \) where \( n > m + 1 \) . If the first column of \( A \) is \( \mathbf{0} \), then you could let \( \mathbf{x} = {\mathbf{e}}_{1} \) as above. If the first column is not the zero vector, then by doing row operations, the equation \( A\mathbf{x} = \mathbf{0} \) can be reduced to the equivalent system\n\n\[ {A}_{1}\mathbf{x} = \mathbf{0} \]\n\nwhere \( {A}_{1} \) is of the form\n\n\[ {A}_{1} = \left( \begin{matrix} 1 & {\mathbf{a}}^{T} \\ \mathbf{0} & B \end{matrix}\right) \]\n\nwhere \( B \) is an \( m \times \left( {n - 1}\right) \) matrix. Since \( n > m + 1 \), it follows that \( \left( {n - 1}\right) > m \) and so by induction, there exists a nonzero vector \( \mathbf{y} \in {\mathbb{F}}^{n - 1} \) such that \( B\mathbf{y} = \mathbf{0} \) . Then consider the\n\nvector\n\n\[ \mathbf{x} = \left( \begin{array}{l} b \\ \mathbf{y} \end{array}\right) \]\n\n\( {A}_{1}\mathbf{x} \) has for its top entry the expression \( b + {\mathbf{a}}^{T}\mathbf{y} \) . Letting \( B = \left( \begin{matrix} {\mathbf{b}}_{1}^{T} \\ \vdots \\ {\mathbf{b}}_{m}^{T} \end{matrix}\right) \), the \( {i}^{\text{th }} \) entry of \( {A}_{1}\mathbf{x} \) for \( i > 1 \) is of the form \( {\mathbf{b}}_{i}^{T}\mathbf{y} = 0 \) . 
Thus if \( b \) is chosen to satisfy the equation \( b + {\mathbf{a}}^{T}\mathbf{y} = 0 \) , then \( {A}_{1}\mathbf{x} = \mathbf{0} \) . \( \blacksquare \)
|
Yes
|
Proposition 2.4.2 Let \( V \subseteq {\mathbb{F}}^{n} \) . Then \( V \) is a subspace if and only if it is a vector space itself with respect to the same operations of scalar multiplication and vector addition.
|
Proof: Suppose first that \( V \) is a subspace. All algebraic properties involving scalar multiplication and vector addition hold for \( V \) because these things hold for \( {\mathbb{F}}^{n} \) . Is \( \mathbf{0} \in V \) ? Yes it is. This is because \( 0\mathbf{v} \in V \) and \( 0\mathbf{v} = \mathbf{0} \) . By assumption, for \( \alpha \) a scalar and \( \mathbf{v} \in V,\alpha \mathbf{v} \in V \) . Therefore, \( - \mathbf{v} = \left( {-1}\right) \mathbf{v} \in V \) . Thus \( V \) has the additive identity and additive inverse. By assumption, \( V \) is closed with respect to the two operations. Thus \( V \) is a vector space. If \( V \subseteq {\mathbb{F}}^{n} \) is a vector space, then by definition, if \( \alpha ,\beta \) are scalars and \( \mathbf{u},\mathbf{v} \) vectors in \( V \), it follows that \( \alpha \mathbf{v} + \beta \mathbf{u} \in V \) .
|
Yes
|
Lemma 2.4.3 A set of vectors \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{p}}\right\} \) is linearly independent if and only if none of the vectors can be obtained as a linear combination of the others.
|
Proof: Suppose first that \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{p}}\right\} \) is linearly independent. If \( {\mathbf{x}}_{k} = \mathop{\sum }\limits_{{j \neq k}}{c}_{j}{\mathbf{x}}_{j} \), then\n\n\[ \mathbf{0} = 1{\mathbf{x}}_{k} + \mathop{\sum }\limits_{{j \neq k}}\left( {-{c}_{j}}\right) {\mathbf{x}}_{j} \]\n\na nontrivial linear combination, contrary to assumption. This shows that if the set is linearly independent, then none of the vectors is a linear combination of the others.\n\nNow suppose no vector is a linear combination of the others. Is \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{p}}\right\} \) linearly independent? If it is not, there exist scalars \( {c}_{i} \), not all zero such that\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{p}{c}_{i}{\mathbf{x}}_{i} = \mathbf{0} \]\n\nSay \( {c}_{k} \neq 0 \) . Then you can solve for \( {\mathbf{x}}_{k} \) as\n\n\[ {\mathbf{x}}_{k} = \mathop{\sum }\limits_{{j \neq k}}\left( {-{c}_{j}/{c}_{k}}\right) {\mathbf{x}}_{j} \]\n\ncontrary to assumption.
|
Yes
|
Theorem 2.4.4 (Exchange Theorem) Let \( \\left\\{ {{\\mathbf{x}}_{1},\\cdots ,{\\mathbf{x}}_{r}}\\right\\} \) be a linearly independent set of vectors such that each \( {\\mathbf{x}}_{i} \) is in \( \\operatorname{span}\\left( {{\\mathbf{y}}_{1},\\cdots ,{\\mathbf{y}}_{s}}\\right) \) . Then \( r \\leq s \) .
|
Proof 1: Suppose not. Then \( r > s \) . By assumption, there exist scalars \( {a}_{ji} \) such that\n\n\[ \n{\\mathbf{x}}_{i} = \\mathop{\\sum }\\limits_{{j = 1}}^{s}{a}_{ji}{\\mathbf{y}}_{j} \n\]\n\nThe matrix \( A \) whose \( j{i}^{th} \) entry is \( {a}_{ji} \) has more columns than rows. Therefore, by Theorem 2.3.8 there exists a nonzero vector \( \\mathbf{b} \\in {\\mathbb{F}}^{r} \) such that \( A\\mathbf{b} = \\mathbf{0} \) . Thus\n\n\[ \n0 = \\mathop{\\sum }\\limits_{{i = 1}}^{r}{a}_{ji}{b}_{i}\\text{ for each }j \n\]\n\nThen\n\[ \n\\mathop{\\sum }\\limits_{{i = 1}}^{r}{b}_{i}{\\mathbf{x}}_{i} = \\mathop{\\sum }\\limits_{{i = 1}}^{r}{b}_{i}\\mathop{\\sum }\\limits_{{j = 1}}^{s}{a}_{ji}{\\mathbf{y}}_{j} = \\mathop{\\sum }\\limits_{{j = 1}}^{s}\\left( {\\mathop{\\sum }\\limits_{{i = 1}}^{r}{a}_{ji}{b}_{i}}\\right) {\\mathbf{y}}_{j} = \\mathbf{0} \n\]\n\ncontradicting the assumption that \( \\left\\{ {{\\mathbf{x}}_{1},\\cdots ,{\\mathbf{x}}_{r}}\\right\\} \) is linearly independent.
|
Yes
|
Corollary 2.4.6 Let \( \left\{ {{\mathbf{x}}_{1},\cdots ,{\mathbf{x}}_{r}}\right\} \) and \( \left\{ {{\mathbf{y}}_{1},\cdots ,{\mathbf{y}}_{s}}\right\} \) be two bases of \( {\mathbb{F}}^{n} \) . Then \( r = s = n \) .
|
Proof: From the exchange theorem, \( r \leq s \) and \( s \leq r \), so \( r = s \) . Now note that the vectors\n\n\[ \n{\mathbf{e}}_{i} = \overset{1\text{ is in the }{i}^{\text{th }}\text{ slot }}{\overbrace{\left( 0,\cdots ,0,1,0,\cdots ,0\right) }} \n\]\n\nfor \( i = 1,2,\cdots, n \) are a basis for \( {\mathbb{F}}^{n} \), and there are \( n \) of them. Hence \( r = s = n \) . \( \blacksquare \)
|
No
|
Lemma 2.4.7 Let \\( \\left\\{ {{\\mathbf{v}}_{1},\\cdots ,{\\mathbf{v}}_{r}}\\right\\} \) be a set of vectors. Then \( V \\equiv \\operatorname{span}\\left( {{\\mathbf{v}}_{1},\\cdots ,{\\mathbf{v}}_{r}}\\right) \) is a subspace.
|
Proof: Suppose \( \\alpha ,\\beta \) are two scalars and let \( \\mathop{\\sum }\\limits_{{k = 1}}^{r}{c}_{k}{\\mathbf{v}}_{k} \) and \( \\mathop{\\sum }\\limits_{{k = 1}}^{r}{d}_{k}{\\mathbf{v}}_{k} \) are two elements\n\nof \( V \) . What about\n\[ \n\\alpha \\mathop{\\sum }\\limits_{{k = 1}}^{r}{c}_{k}{\\mathbf{v}}_{k} + \\beta \\mathop{\\sum }\\limits_{{k = 1}}^{r}{d}_{k}{\\mathbf{v}}_{k}?\n\]\n\nIs it also in \( V \) ?\n\n\[ \n\\alpha \\mathop{\\sum }\\limits_{{k = 1}}^{r}{c}_{k}{\\mathbf{v}}_{k} + \\beta \\mathop{\\sum }\\limits_{{k = 1}}^{r}{d}_{k}{\\mathbf{v}}_{k} = \\mathop{\\sum }\\limits_{{k = 1}}^{r}\\left( {\\alpha {c}_{k} + \\beta {d}_{k}}\\right) {\\mathbf{v}}_{k} \\in V\n\]\n\nso the answer is yes. -
|
Yes
|
Corollary 2.4.9 Let \( \\left\\{ {{\\mathbf{x}}_{1},\\cdots ,{\\mathbf{x}}_{r}} \\right\\} \) and \( \\left\\{ {{\\mathbf{y}}_{1},\\cdots ,{\\mathbf{y}}_{s}} \\right\\} \) be two bases for \( V \) . Then \( r = s \) .
|
Proof: From the exchange theorem, \( r \\leq s \) and \( s \\leq r \) .
|
Yes
|
Lemma 2.4.11 Suppose \( \mathbf{v} \notin \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right) \) and \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right\} \) is linearly independent. Then \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k},\mathbf{v}}\right\} \) is also linearly independent.
|
Proof: Suppose \( \mathop{\sum }\limits_{{i = 1}}^{k}{c}_{i}{\mathbf{u}}_{i} + d\mathbf{v} = \mathbf{0} \) . It is required to verify that each \( {c}_{i} = 0 \) and that \( d = 0 \) . But if \( d \neq 0 \), then you can solve for \( \mathbf{v} \) as a linear combination of the vectors \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right\} \) :\n\n\[ \mathbf{v} = - \mathop{\sum }\limits_{{i = 1}}^{k}\left( \frac{{c}_{i}}{d}\right) {\mathbf{u}}_{i} \]\n\ncontrary to assumption. Therefore, \( d = 0 \) . But then \( \mathop{\sum }\limits_{{i = 1}}^{k}{c}_{i}{\mathbf{u}}_{i} = \mathbf{0} \) and the linear independence of \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{k}}\right\} \) implies each \( {c}_{i} = 0 \) also. \( \blacksquare \)
|
Yes
|
Theorem 2.4.12 Let \( V \) be a nonzero subspace of \( {\mathbb{F}}^{n} \). Then \( V \) has a basis.
|
Proof: Let \( {\mathbf{v}}_{1} \in V \) where \( {\mathbf{v}}_{1} \neq \mathbf{0} \). If \( \operatorname{span}\left\{ {\mathbf{v}}_{1}\right\} = V \), stop: \( \left\{ {\mathbf{v}}_{1}\right\} \) is a basis for \( V \). Otherwise, there exists \( {\mathbf{v}}_{2} \in V \) which is not in \( \operatorname{span}\left\{ {\mathbf{v}}_{1}\right\} \). By Lemma 2.4.11, \( \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \) is a linearly independent set of vectors. If \( \operatorname{span}\left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} = V \), stop: \( \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \) is a basis for \( V \). If \( \operatorname{span}\left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \neq V \), then there exists \( {\mathbf{v}}_{3} \notin \operatorname{span}\left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2}}\right\} \) and \( \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2},{\mathbf{v}}_{3}}\right\} \) is a larger linearly independent set of vectors. Continuing this way, the process must stop before \( n + 1 \) steps because if not, it would be possible to obtain \( n + 1 \) linearly independent vectors, contrary to the exchange theorem. \( \blacksquare \)
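The construction in this proof is effectively an algorithm. Below is a minimal sketch over the rationals (the names `rank` and `greedy_basis` are illustrative, not from the text): each incoming vector is kept exactly when it lies outside the span of those already kept, which by Lemma 2.4.11 keeps the retained set linearly independent.

```python
from fractions import Fraction

def rank(rows):
    """Row rank over the rationals, by Gaussian elimination."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def greedy_basis(vectors):
    """Keep a vector iff it is not already in the span of those kept:
    the rank grows exactly when the new vector is outside the span."""
    basis = []
    for v in vectors:
        if rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis

vs = [(1, 0, 0), (2, 0, 0), (0, 1, 0), (1, 1, 0)]
print(greedy_basis(vs))  # [(1, 0, 0), (0, 1, 0)]
```

The same routine also illustrates Theorem 2.4.14 below: run on a spanning set, it extracts a spanning linearly independent subset.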
|
Yes
|
Corollary 2.4.13 Let \( V \) be a subspace of \( {\mathbb{F}}^{n} \) and let \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) be a linearly independent set of vectors in \( V \) . Then either it is a basis for \( V \) or there exist vectors, \( {\mathbf{v}}_{r + 1},\cdots ,{\mathbf{v}}_{s} \) such that \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r},{\mathbf{v}}_{r + 1},\cdots ,{\mathbf{v}}_{s}}\right\} \) is a basis for \( V \) .
|
Proof: This follows immediately from the proof of Theorem 2.4.12. You do exactly the same argument except you start with \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) rather than \( \left\{ {\mathbf{v}}_{1}\right\} \) .
|
No
|
Theorem 2.4.14 Let \( V \) be a subspace of \( {\mathbb{F}}^{n} \) and suppose \( \operatorname{span}\left( {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{p}}\right) = V \) where the \( {\mathbf{u}}_{i} \) are nonzero vectors. Then there exist vectors \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) such that \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \subseteq \) \( \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{p}}\right\} \) and \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) is a basis for \( V \) .
|
Proof: Let \( r \) be the smallest positive integer with the property that for some set \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \subseteq \left\{ {{\mathbf{u}}_{1},\cdots ,{\mathbf{u}}_{p}}\right\} \)\n\n\[ \operatorname{span}\left( {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right) = V \]\n\nThen \( r \leq p \) and it must be the case that \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) is linearly independent because if it were not so, one of the vectors, say \( {\mathbf{v}}_{k} \), would be a linear combination of the others. But then you could delete this vector from \( \left\{ {{\mathbf{v}}_{1},\cdots ,{\mathbf{v}}_{r}}\right\} \) and the resulting list of \( r - 1 \) vectors would still span \( V \), contrary to the definition of \( r \) .
|
Yes
|
Lemma 2.6.2 Let \( A\left( t\right) \) be an \( m \times n \) matrix and let \( B\left( t\right) \) be an \( n \times p \) matrix with the property that all the entries of these matrices are differentiable functions. Then\n\n\[ \n{\left( A\left( t\right) B\left( t\right) \right) }^{\prime } = {A}^{\prime }\left( t\right) B\left( t\right) + A\left( t\right) {B}^{\prime }\left( t\right) .\n\]
|
Proof: This is like the usual proof.\n\n\[ \n\frac{1}{h}\left( {A\left( {t + h}\right) B\left( {t + h}\right) - A\left( t\right) B\left( t\right) }\right) = \n\]\n\n\[ \n\frac{1}{h}\left( {A\left( {t + h}\right) B\left( {t + h}\right) - A\left( {t + h}\right) B\left( t\right) }\right) + \frac{1}{h}\left( {A\left( {t + h}\right) B\left( t\right) - A\left( t\right) B\left( t\right) }\right) \n\]\n\n\[ \n= A\left( {t + h}\right) \frac{B\left( {t + h}\right) - B\left( t\right) }{h} + \frac{A\left( {t + h}\right) - A\left( t\right) }{h}B\left( t\right) \n\]\n\nand now, using the fact that the entries of the matrices are all differentiable, one can pass to a limit in both sides as \( h \rightarrow 0 \) and conclude that\n\n\[ \n{\left( A\left( t\right) B\left( t\right) \right) }^{\prime } = {A}^{\prime }\left( t\right) B\left( t\right) + A\left( t\right) {B}^{\prime }\left( t\right) \blacksquare \n\]
|
Yes
|
Theorem 2.6.4 Let \( \mathbf{i}\left( t\right) ,\mathbf{j}\left( t\right) ,\mathbf{k}\left( t\right) \) be as described. Then there exists a unique vector \( \mathbf{\Omega }\left( t\right) \) such that if \( \mathbf{u}\left( t\right) \) is a vector whose components are constant with respect to \( \mathbf{i}\left( t\right) ,\mathbf{j}\left( t\right) ,\mathbf{k}\left( t\right) \) , then\n\n\[{\mathbf{u}}^{\prime }\left( t\right) = \mathbf{\Omega }\left( t\right) \times \mathbf{u}\left( t\right)\]
|
Proof: It only remains to prove uniqueness. Suppose \( {\mathbf{\Omega }}_{1} \) also works. Then \( \mathbf{u}\left( t\right) = Q\left( t\right) \mathbf{u} \) and so \( {\mathbf{u}}^{\prime }\left( t\right) = {Q}^{\prime }\left( t\right) \mathbf{u} \) and\n\n\[{Q}^{\prime }\left( t\right) \mathbf{u} = \mathbf{\Omega } \times Q\left( t\right) \mathbf{u} = {\mathbf{\Omega }}_{1} \times Q\left( t\right) \mathbf{u}\]\n\nfor all \( \mathbf{u} \) . Therefore,\n\n\[\left( {\mathbf{\Omega } - {\mathbf{\Omega }}_{1}}\right) \times Q\left( t\right) \mathbf{u} = \mathbf{0}\]\n\nfor all \( \mathbf{u} \) and since \( Q\left( t\right) \) is one to one and onto, this implies \( \left( {\mathbf{\Omega } - {\mathbf{\Omega }}_{1}}\right) \times \mathbf{w} = \mathbf{0} \) for all \( \mathbf{w} \) and thus \( \mathbf{\Omega } - {\mathbf{\Omega }}_{1} = \mathbf{0} \) . \( \blacksquare \)
|
Yes
|
Example 2.6.5 Suppose a rock is dropped from a tall building. Where will it strike?
|
Assume \( \mathbf{a} = - g\mathbf{k} \) and the \( \mathbf{j} \) component of \( {\mathbf{a}}_{B} \) is approximately\n\n\[- {2\omega }\left( {{x}^{\prime }\cos \phi + {z}^{\prime }\sin \phi }\right) .\n\]\n\nThe dominant term in this expression is clearly the second one because \( {x}^{\prime } \) will be small. Also, the \( \mathbf{i} \) and \( \mathbf{k} \) contributions will be very small. Therefore, the following equation is descriptive of the situation.\n\n\[{\mathbf{a}}_{B} = - g\mathbf{k} - 2{z}^{\prime }\omega \sin \phi \mathbf{j}.\n\]\n\nNow \( {z}^{\prime } = - {gt} \) approximately, so the \( \mathbf{j} \) component of the acceleration is approximately\n\n\[{2gt\omega }\sin \phi \text{.}\n\]\n\nTwo integrations give \( \left( {{\omega g}{t}^{3}/3}\right) \sin \phi \) for the \( \mathbf{j} \) component of the relative displacement at time \( t \) .\n\nThis shows the rock does not fall directly towards the center of the earth as expected but slightly to the east.
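As a rough numeric illustration of the displacement formula \( \left( {\omega g}{t}^{3}/3\right) \sin \phi \), here is a sketch; the specific values (a 5-second fall, \( \phi \) taken as 45 degrees) are assumptions chosen for illustration, not from the text.

```python
import math

# Eastward deflection (omega * g * t**3 / 3) * sin(phi) from the example.
omega = 7.2921e-5   # Earth's angular speed, rad/s
g = 9.8             # m/s^2
t = 5.0             # seconds of fall (illustrative)
phi = math.radians(45)  # illustrative angle

deflection = omega * g * t**3 / 3 * math.sin(phi)
print(f"{deflection * 100:.2f} cm east")  # about 2 cm
```

Even for a fall of several seconds, the deflection is only a couple of centimeters, which is why the effect is easy to miss.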
|
Yes
|
Example 3.1.2 Find \( \\det \\left( \\begin{matrix} 2 & 4 \\\\ - 1 & 6 \\end{matrix}\\right) \) .
|
From the definition this is just \( \\left( 2\\right) \\left( 6\\right) - \\left( {-1}\\right) \\left( 4\\right) = {16} \) .
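The definition is easy to check mechanically; a minimal sketch in Python (the function name `det2` is ours, not the text's):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

# The example above: det [[2, 4], [-1, 6]] = (2)(6) - (-1)(4) = 16
print(det2(2, 4, -1, 6))  # 16
```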
|
Yes
|
Theorem 3.1.4 Let \( A \) be an \( n \times n \) matrix where \( n \geq 2 \). Then\n\n\[ \det \left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}\operatorname{cof}{\left( A\right) }_{ij} = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ij}\operatorname{cof}{\left( A\right) }_{ij}. \]
|
The first formula consists of expanding the determinant along the \( {i}^{\text{th }} \) row and the second expands the determinant along the \( {j}^{\text{th }} \) column.\n\nNote that for an \( n \times n \) matrix, you will need \( n \) ! terms to evaluate the determinant in this way. If \( n = {10} \), this is \( {10}! = 3,{628},{800} \) terms. This is a lot of terms.\n\nIn addition to the difficulties just discussed, why is the determinant well defined? Why should you get the same thing when you expand along any row or column? I think you should regard this claim that you always get the same answer by picking any row or column with considerable skepticism. It is incredible and not at all obvious. However, it is true, and it requires only a little effort to establish. This is done in the section on the theory of the determinant which follows.
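To make the cost concrete, here is a sketch of cofactor expansion along the first row (the name `det_cofactor` is ours); the naive recursion performs on the order of \( n! \) multiplications, matching the count above:

```python
def det_cofactor(A):
    """Determinant by expansion along the first row:
    det(A) = sum_j a_{1j} * (-1)^{1+j} * det(minor of a_{1j})."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))  # -2
```

Each call spawns \( n \) subproblems of size \( n - 1 \), so the run time grows factorially; in practice one uses row reduction instead, as in the examples below.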
|
No
|
Corollary 3.1.6 Let \( M \) be an upper (lower) triangular matrix. Then \( \det \left( M\right) \) is obtained by taking the product of the entries on the main diagonal.
|
Proof: The corollary is true if the matrix is \( 1 \times 1 \) . Suppose then that it holds for \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrices and that the matrix is \( n \times n \) . Then the\n\nmatrix is of the form\n\[\n\left( \begin{matrix} {m}_{11} & \mathbf{a} \\ \mathbf{0} & {M}_{1} \end{matrix}\right)\n\]\n\nwhere \( {M}_{1} \) is \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) . Then expanding along the first column, you get \( {m}_{11}\det \left( {M}_{1}\right) + 0 \) . Then use the induction hypothesis to obtain that \( \det \left( {M}_{1}\right) = \mathop{\prod }\limits_{{i = 2}}^{n}{m}_{ii} \), so that \( \det \left( M\right) = \mathop{\prod }\limits_{{i = 1}}^{n}{m}_{ii} \) . \( \blacksquare \)
|
Yes
|
Find \( \det \left( A\right) \) .
|
From the above corollary, this is -6 .
|
No
|
Find the determinant of the matrix\n\n\[ A = \left( \begin{matrix} 1 & 2 & 3 & 4 \\ 5 & 1 & 2 & 3 \\ 4 & 5 & 4 & 3 \\ 2 & 2 & - 4 & 5 \end{matrix}\right) \]
|
Replace the second row by \( \left( {-5}\right) \) times the first row added to it. Then replace the third row by \( \left( {-4}\right) \) times the first row added to it. Finally, replace the fourth row by \( \left( {-2}\right) \) times the first row added to it. This yields the matrix\n\n\[ B = \left( \begin{matrix} 1 & 2 & 3 & 4 \\ 0 & - 9 & - {13} & - {17} \\ 0 & - 3 & - 8 & - {13} \\ 0 & - 2 & - {10} & - 3 \end{matrix}\right) \]\n\nand from the above corollary, it has the same determinant as \( A \) . Now using the corollary some more, \( \det \left( B\right) = \left( \frac{-1}{3}\right) \det \left( C\right) \) where\n\n\[ C = \left( \begin{matrix} 1 & 2 & 3 & 4 \\ 0 & 0 & {11} & {22} \\ 0 & - 3 & - 8 & - {13} \\ 0 & 6 & {30} & 9 \end{matrix}\right) \]\n\nThe second row was replaced by \( \left( {-3}\right) \) times the third row added to the second row and then the last row was multiplied by \( \left( {-3}\right) \) . Now replace the last row with 2 times the third added to it and then switch the third and second rows. Then \( \det \left( C\right) = - \det \left( D\right) \) where\n\n\[ D = \left( \begin{matrix} 1 & 2 & 3 & 4 \\ 0 & - 3 & - 8 & - {13} \\ 0 & 0 & {11} & {22} \\ 0 & 0 & {14} & - {17} \end{matrix}\right) \]\n\nYou could do more row operations or you could note that this can be easily expanded along the first column followed by expanding the \( 3 \times 3 \) matrix which results along its first column.\n\nThus\n\[ \det \left( D\right) = 1\left( {-3}\right) \left| \begin{matrix} {11} & {22} \\ {14} & - {17} \end{matrix}\right| = {1485} \]\n\nand so \( \det \left( C\right) = - {1485} \) and \( \det \left( A\right) = \det \left( B\right) = \left( \frac{-1}{3}\right) \left( {-{1485}}\right) = {495} \) .
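The same row-reduction strategy can be sketched in code; this version keeps exact arithmetic with `fractions.Fraction` and tracks how operations affect the determinant (each row swap flips the sign, and the pivots multiply together). The helper name is ours, not the text's.

```python
from fractions import Fraction

def det_by_elimination(M):
    """Determinant via Gaussian elimination: adding a multiple of one
    row to another leaves det unchanged; a swap flips its sign."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    det = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # a zero column: det = 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det                  # row switch changes the sign
        det *= A[col][col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    return det

A = [[1, 2, 3, 4], [5, 1, 2, 3], [4, 5, 4, 3], [2, 2, -4, 5]]
print(det_by_elimination(A))  # 495
```

This confirms the value \( \det \left( A\right) = {495} \) obtained by hand above.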
|
Yes
|
Find the inverse of the matrix\n\n\[ A = \left( \begin{array}{lll} 1 & 2 & 3 \\ 3 & 0 & 1 \\ 1 & 2 & 1 \end{array}\right) \]
|
First find the determinant of this matrix. This is seen to be 12 . The cofactor matrix of\n\n\( A \) is\n\[ \left( \begin{matrix} - 2 & - 2 & 6 \\ 4 & - 2 & 0 \\ 2 & 8 & - 6 \end{matrix}\right) \]\n\nEach entry of \( A \) was replaced by its cofactor. Therefore, from the above theorem, the inverse of \( A \) should equal\n\n\[ \frac{1}{12}{\left( \begin{matrix} - 2 & - 2 & 6 \\ 4 & - 2 & 0 \\ 2 & 8 & - 6 \end{matrix}\right) }^{T} = \left( \begin{matrix} - \frac{1}{6} & \frac{1}{3} & \frac{1}{6} \\ - \frac{1}{6} & - \frac{1}{6} & \frac{2}{3} \\ \frac{1}{2} & 0 & - \frac{1}{2} \end{matrix}\right) . \]
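The computation can be mirrored in code; a sketch using exact rational arithmetic (the helper names `det3`, `inverse_by_cofactors` are ours, not the text's):

```python
from fractions import Fraction

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def inverse_by_cofactors(M):
    """A^{-1} = (1/det A) * cof(A)^T, as in the worked example."""
    d = Fraction(det3(M))
    def cof(i, j):
        # 2x2 minor with row i and column j deleted, signed by (-1)^(i+j).
        m = [[M[r][c] for c in range(3) if c != j]
             for r in range(3) if r != i]
        return (-1) ** (i + j) * (m[0][0] * m[1][1] - m[0][1] * m[1][0])
    # Transpose of the cofactor matrix, divided by the determinant.
    return [[cof(j, i) / d for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [3, 0, 1], [1, 2, 1]]
print(inverse_by_cofactors(A))
```

The output agrees entry by entry with the matrix displayed above, e.g. the \( \left( 1,2\right) \) entry is \( 4/{12} = 1/3 \).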
|
Yes
|
Suppose\n\n\[ A\left( t\right) = \left( \begin{matrix} {e}^{t} & 0 & 0 \\ 0 & \cos t & \sin t \\ 0 & - \sin t & \cos t \end{matrix}\right) \]\n\nFind \( A{\left( t\right) }^{-1} \) .
|
First note \( \det \left( {A\left( t\right) }\right) = {e}^{t} \) . A routine computation using the above theorem shows that this inverse is\n\n\[ \frac{1}{{e}^{t}}{\left( \begin{matrix} 1 & 0 & 0 \\ 0 & {e}^{t}\cos t & {e}^{t}\sin t \\ 0 & - {e}^{t}\sin t & {e}^{t}\cos t \end{matrix}\right) }^{T} = \left( \begin{matrix} {e}^{-t} & 0 & 0 \\ 0 & \cos t & - \sin t \\ 0 & \sin t & \cos t \end{matrix}\right) . \]
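A quick numeric sanity check of this inverse at a sample value of \( t \) (pure standard library; the value \( t = 0.7 \) is an arbitrary choice for illustration):

```python
import math

def A(t):
    return [[math.exp(t), 0, 0],
            [0, math.cos(t), math.sin(t)],
            [0, -math.sin(t), math.cos(t)]]

def A_inv(t):
    # The claimed inverse from the computation above.
    return [[math.exp(-t), 0, 0],
            [0, math.cos(t), -math.sin(t)],
            [0, math.sin(t), math.cos(t)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = 0.7
P = matmul(A(t), A_inv(t))
print(all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

The off-diagonal entries vanish because \( \cos t\left( { - \sin t}\right) + \sin t\cos t = 0 \), and the diagonal ones are \( {\cos }^{2}t + {\sin }^{2}t = 1 \) and \( {e}^{t}{e}^{-t} = 1 \).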
|
Yes
|
Lemma 3.3.1 There exists a unique function, \( {\operatorname{sgn}}_{n} \) which maps each ordered list of numbers from \( \{ 1,\cdots, n\} \) to one of the three numbers, \( 0,1 \), or -1 which also has the following properties.\n\n\[ \n{\operatorname{sgn}}_{n}\left( {1,\cdots, n}\right) = 1 \n\]\n\n(3.2)\n\n\[ \n{\operatorname{sgn}}_{n}\left( {{i}_{1},\cdots, p,\cdots, q,\cdots ,{i}_{n}}\right) = - {\operatorname{sgn}}_{n}\left( {{i}_{1},\cdots, q,\cdots, p,\cdots ,{i}_{n}}\right) \n\]\n\n(3.3)\n\nIn words, the second property states that if two of the numbers are switched, the value of the function is multiplied by -1 . Also, in the case where \( n > 1 \) and \( \left\{ {{i}_{1},\cdots ,{i}_{n}}\right\} = \{ 1,\cdots, n\} \) so that every number from \( \{ 1,\cdots, n\} \) appears in the ordered list, \( \left( {{i}_{1},\cdots ,{i}_{n}}\right) \) ,\n\n\[ \n{\operatorname{sgn}}_{n}\left( {{i}_{1},\cdots ,{i}_{\theta - 1}, n,{i}_{\theta + 1},\cdots ,{i}_{n}}\right) \equiv \n\]\n\n\[ \n{\left( -1\right) }^{n - \theta }{\operatorname{sgn}}_{n - 1}\left( {{i}_{1},\cdots ,{i}_{\theta - 1},{i}_{\theta + 1},\cdots ,{i}_{n}}\right) \n\]\n\n(3.4)\n\nwhere \( n = {i}_{\theta } \) in the ordered list, \( \left( {{i}_{1},\cdots ,{i}_{n}}\right) \) .
|
Proof: To begin with, it is necessary to show the existence of such a function. This is clearly true if \( n = 1 \) . Define \( {\operatorname{sgn}}_{1}\left( 1\right) \equiv 1 \) and observe that it works. No switching is possible. In the case where \( n = 2 \), it is also clearly true. Let \( {\operatorname{sgn}}_{2}\left( {1,2}\right) = 1 \) and \( {\operatorname{sgn}}_{2}\left( {2,1}\right) = - 1 \) while \( {\operatorname{sgn}}_{2}\left( {2,2}\right) = {\operatorname{sgn}}_{2}\left( {1,1}\right) = 0 \) and verify it works. Assuming such a function exists for \( n \) , \( {\operatorname{sgn}}_{n + 1} \) will be defined in terms of \( {\operatorname{sgn}}_{n} \) . If there are any repeated numbers in \( \left( {{i}_{1},\cdots ,{i}_{n + 1}}\right) \) , \( {\operatorname{sgn}}_{n + 1}\left( {{i}_{1},\cdots ,{i}_{n + 1}}\right) \equiv 0 \) . If there are no repeats, then \( n + 1 \) appears somewhere in the ordered list. Let \( \theta \) be the position of the number \( n + 1 \) in the list. Thus, the list is of the form \( \left( {{i}_{1},\cdots ,{i}_{\theta - 1}, n + 1,{i}_{\theta + 1},\cdots ,{i}_{n + 1}}\right) \) . From (3.4) it must be that\n\n\[ \n{\operatorname{sgn}}_{n + 1}\left( {{i}_{1},\cdots ,{i}_{\theta - 1}, n + 1,{i}_{\theta + 1},\cdots ,{i}_{n + 1}}\right) \equiv \n\]\n\n\[ \n{\left( -1\right) }^{n + 1 - \theta }{\operatorname{sgn}}_{n}\left( {{i}_{1},\cdots ,{i}_{\theta - 1},{i}_{\theta + 1},\cdots ,{i}_{n + 1}}\right) . \n\]\n\nIt is necessary to verify this satisfies (3.2) and (3.3) with \( n \) replaced with \( n + 1 \) . The first of these is obviously true because\n\n\[ \n{\operatorname{sgn}}_{n + 1}\left( {1,\cdots, n, n + 1}\right) \equiv {\left( -1\right) }^{n + 1 - \left( {n + 1}\right) }{\operatorname{sgn}}_{n}\left( {1,\cdots, n}\right) = 1. \n\]\n\nIf there are repeated numbers in \( \left( {{i}_{1},\cdots ,{i}_{n + 1}}\right) \), then it is obvious (3.3) holds because both sides would equal zero from the above definition. 
It remains to verify (3.3) in the case where there are no numbers repeated in \( \left( {{i}_{1},\cdots ,{i}_{n + 1}}\right) \) . Consider\n\n\[ \n{\operatorname{sgn}}_{n + 1}\left( {{i}_{1},\cdots ,\overset{r}{p},\cdots ,\overset{s}{q},\cdots ,{i}_{n + 1}}\right) , \n\]\n\nwhere the \( r \) above the \( p \) indicates the number \( p \) is in the \( {r}^{th} \) position and the \( s \) above the \( q \) indicates that the number \( q \) is in the \( {s}^{\text{th }} \) position. Suppose first that \( r < \theta < s \) . Then\n\n\[ \n{\operatorname{sgn}}_{n + 1}\left( {{i}_{1},\cdots ,\overset{r}{p},\cdots ,\overset{\theta }{n + 1},\cdots ,\overset{s}{q},\cdots ,{i}_{n + 1}}\right) \equiv \n\]\n\n\[ \n{\left( -1\right) }^{n + 1 - \theta }{\operatorname{sgn}}_{n}\left( {{i}_{1},\cdots ,\overset{r}{p},\cdots ,\overset{s - 1}{q},\cdots ,{i}_{n + 1}}\right) \n\]\n\nand by property (3.3) for \( {\operatorname{sgn}}_{n} \), switching \( p \) and \( q \) in the list on the right multiplies it by \( - 1 \) . This verifies (3.3) for \( {\operatorname{sgn}}_{n + 1} \) in this case; the cases where \( \theta \) lies outside the interval from \( r \) to \( s \) are similar.
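An equivalent way to compute this sign function, sketched in Python: for a list with no repeats, \( \operatorname{sgn} \) equals \( {\left( -1\right) }^{k} \) where \( k \) is the number of inversions (pairs out of order). This inversion-count characterization is a standard equivalent of the recursive construction above, not the text's definition verbatim.

```python
def sgn(seq):
    """Sign of an ordered list from {1,...,n}: 0 if any number repeats,
    otherwise (-1)^(number of inversions), which equals the parity of
    the switches needed to sort the list."""
    if len(set(seq)) != len(seq):
        return 0
    inversions = sum(1 for i in range(len(seq))
                     for j in range(i + 1, len(seq))
                     if seq[i] > seq[j])
    return -1 if inversions % 2 else 1

print(sgn((1, 2, 3)))  # 1   (property 3.2)
print(sgn((2, 1, 3)))  # -1  (one switch, property 3.3)
print(sgn((2, 2, 3)))  # 0   (a repeat)
```

Each single switch changes the inversion count by an odd number, so the value flips sign, exactly as (3.3) demands.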
|
Yes
|
Lemma 3.3.2 Every ordered list of \( \{ 1,2,\cdots, n\} \) can be obtained from every other ordered list by a finite number of switches. Also, sgn is unique.
|
Proof: This is obvious if \( n = 1 \) or 2 . Suppose then that it is true for sets of \( n - 1 \) elements. Take two ordered lists of numbers, \( {P}_{1},{P}_{2} \) . To get from \( {P}_{1} \) to \( {P}_{2} \) using switches, first make a switch to obtain the last element in the list coinciding with the last element of \( {P}_{2} \) . By induction, there are switches which will arrange the first \( n - 1 \) to the right order.\n\nTo see \( {\operatorname{sgn}}_{n} \) is unique, if there exist two functions, \( f \) and \( g \) both satisfying (3.2) and (3.3), you could start with \( f\left( {1,\cdots, n}\right) = g\left( {1,\cdots, n}\right) \) and applying the same sequence of switches, eventually arrive at \( f\left( {{i}_{1},\cdots ,{i}_{n}}\right) = g\left( {{i}_{1},\cdots ,{i}_{n}}\right) \) . If any numbers are repeated, then (3.3) gives both functions are equal to zero for that ordered list.
|
Yes
|
Proposition 3.3.6 Let \( \left( {{r}_{1},\cdots ,{r}_{n}}\right) \) be an ordered list of numbers from \( \{ 1,\cdots, n\} \) . Then\n\n\[ \operatorname{sgn}\left( {{r}_{1},\cdots ,{r}_{n}}\right) \det \left( A\right) = \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{n}}\right) {a}_{{r}_{1}{k}_{1}}\cdots {a}_{{r}_{n}{k}_{n}} \]\n\n(3.8)\n\n\[ = \det \left( {A\left( {{r}_{1},\cdots ,{r}_{n}}\right) }\right) . \]\n\n(3.9)
|
Proof: Let \( \left( {1,\cdots, n}\right) = \left( {1,\cdots, r,\cdots s,\cdots, n}\right) \) so \( r < s \) .\n\n\[ \det \left( {A\left( {1,\cdots, r,\cdots, s,\cdots, n}\right) }\right) = \]\n\n\( \left( {3.10}\right) \)\n\n\[ \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{r},\cdots ,{k}_{s},\cdots ,{k}_{n}}\right) {a}_{1{k}_{1}}\cdots {a}_{r{k}_{r}}\cdots {a}_{s{k}_{s}}\cdots {a}_{n{k}_{n}}, \]\n\nand renaming the variables, calling \( {k}_{s},{k}_{r} \) and \( {k}_{r},{k}_{s} \), this equals\n\n\[ = \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{s},\cdots ,{k}_{r},\cdots ,{k}_{n}}\right) {a}_{1{k}_{1}}\cdots {a}_{r{k}_{s}}\cdots {a}_{s{k}_{r}}\cdots {a}_{n{k}_{n}} \]\n\n\[ = \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) } - \operatorname{sgn}\left( {{k}_{1},\cdots ,\overset{\text{These got switched }}{\overbrace{{k}_{r},\cdots ,{k}_{s}}},\cdots ,{k}_{n}}\right) {a}_{1{k}_{1}}\cdots {a}_{s{k}_{r}}\cdots {a}_{r{k}_{s}}\cdots {a}_{n{k}_{n}} \]\n\n\[ = - \det \left( {A\left( {1,\cdots, s,\cdots, r,\cdots, n}\right) }\right) . \]\n\n(3.11)\n\nConsequently,\n\n\[ \det \left( {A\left( {1,\cdots, s,\cdots, r,\cdots, n}\right) }\right) = - \det \left( {A\left( {1,\cdots, r,\cdots, s,\cdots, n}\right) }\right) = - \det \left( A\right) \]\n\nNow letting \( A\left( {1,\cdots, s,\cdots, r,\cdots, n}\right) \) play the role of \( A \), and continuing in this way, switching pairs of numbers,\n\n\[ \det \left( {A\left( {{r}_{1},\cdots ,{r}_{n}}\right) }\right) = {\left( -1\right) }^{p}\det \left( A\right) \]\n\nwhere it took \( p \) switches to obtain \( \left( {{r}_{1},\cdots ,{r}_{n}}\right) \) from \( \left( {1,\cdots, n}\right) \) . 
By Lemma 3.3.1, this implies\n\n\[ \det \left( {A\left( {{r}_{1},\cdots ,{r}_{n}}\right) }\right) = {\left( -1\right) }^{p}\det \left( A\right) = \operatorname{sgn}\left( {{r}_{1},\cdots ,{r}_{n}}\right) \det \left( A\right) \]\n\nand proves the proposition in the case when there are no repeated numbers in the ordered list, \( \left( {{r}_{1},\cdots ,{r}_{n}}\right) \) . However, if there is a repeat, say the \( {r}^{th} \) row equals the \( {s}^{th} \) row, then the reasoning of (3.10)-(3.11) shows that \( \det \left( {A\left( {{r}_{1},\cdots ,{r}_{n}}\right) }\right) = 0 \) and also \( \operatorname{sgn}\left( {{r}_{1},\cdots ,{r}_{n}}\right) = 0 \) so the formula holds in this case also.
|
Yes
|
The following formula for \( \det \left( A\right) \) is valid.
|
From Proposition 3.3.6, if the \( {r}_{i} \) are distinct,\n\n\[ \det \left( A\right) = \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{r}_{1},\cdots ,{r}_{n}}\right) \operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{n}}\right) {a}_{{r}_{1}{k}_{1}}\cdots {a}_{{r}_{n}{k}_{n}}. \]\n\nSumming over all ordered lists \( \left( {{r}_{1},\cdots ,{r}_{n}}\right) \) where the \( {r}_{i} \) are distinct (if the \( {r}_{i} \) are not distinct, \( \operatorname{sgn}\left( {{r}_{1},\cdots ,{r}_{n}}\right) = 0 \) and so there is no contribution to the sum),\n\n\[ n!\det \left( A\right) = \mathop{\sum }\limits_{\left( {r}_{1},\cdots ,{r}_{n}\right) }\mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{r}_{1},\cdots ,{r}_{n}}\right) \operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{n}}\right) {a}_{{r}_{1}{k}_{1}}\cdots {a}_{{r}_{n}{k}_{n}}. \]\n\nThis proves the corollary since the formula gives the same number for \( A \) as it does for \( {A}^{T} \) . \( \blacksquare \)
|
Yes
|
If two rows or two columns in an \( n \times n \) matrix \( A \) are switched, the determinant of the resulting matrix equals \( \left( {-1}\right) \) times the determinant of the original matrix. If \( A \) is an \( n \times n \) matrix in which two rows are equal or two columns are equal then \( \det \left( A\right) = 0 \) .
|
Proof: By Proposition 3.3.6 when two rows are switched, the determinant of the resulting matrix is \( \left( {-1}\right) \) times the determinant of the original matrix. By Corollary 3.3.8 the same holds for columns because the columns of the matrix equal the rows of the transposed matrix. Thus if \( {A}_{1} \) is the matrix obtained from \( A \) by switching two columns,\n\n\[ \det \left( A\right) = \det \left( {A}^{T}\right) = - \det \left( {A}_{1}^{T}\right) = - \det \left( {A}_{1}\right) . \]\n\nIf \( A \) has two equal columns or two equal rows, then switching them results in the same matrix. Therefore, \( \det \left( A\right) = - \det \left( A\right) \) and so \( \det \left( A\right) = 0 \) .
|
Yes
|
Corollary 3.3.11 Suppose \( A \) is an \( n \times n \) matrix and some column (row) is a linear combination of \( r \) other columns (rows). Then \( \det \left( A\right) = 0 \) .
|
Proof: Let \( A = \left( \begin{array}{lll} {\mathbf{a}}_{1} & \cdots & {\mathbf{a}}_{n} \end{array}\right) \) be the columns of \( A \) and suppose the condition that one column is a linear combination of \( r \) of the others is satisfied. Then by using Corollary 3.3. you may rearrange the columns to have the \( {n}^{th} \) column a linear combination of the first \( r \) columns. Thus \( {\mathbf{a}}_{n} = \mathop{\sum }\limits_{{k = 1}}^{r}{c}_{k}{\mathbf{a}}_{k} \) and so\n\n\[ \det \left( A\right) = \det \left( \begin{array}{llllll} {\mathbf{a}}_{1} & \cdots & {\mathbf{a}}_{r} & \cdots & {\mathbf{a}}_{n - 1} & \mathop{\sum }\limits_{{k = 1}}^{r}{c}_{k}{\mathbf{a}}_{k} \end{array}\right) . \]\n\nBy Corollary 3.3.9\n\n\[ \det \left( A\right) = \mathop{\sum }\limits_{{k = 1}}^{r}{c}_{k}\det \left( \begin{array}{llllll} {\mathbf{a}}_{1} & \cdots & {\mathbf{a}}_{r} & \cdots & {\mathbf{a}}_{n - 1} & {\mathbf{a}}_{k} \end{array}\right) = 0. \]\n\nThe case for rows follows from the fact that \( \det \left( A\right) = \det \left( {A}^{T}\right) \) .
|
Yes
|
Theorem 3.3.13 Let \( A \) and \( B \) be \( n \times n \) matrices. Then\n\n\[ \det \left( {AB}\right) = \det \left( A\right) \det \left( B\right) . \]
|
Proof: Let \( {c}_{ij} \) be the \( i{j}^{th} \) entry of \( {AB} \) . Then by Proposition 3.3.6,\n\n\[ \det \left( {AB}\right) = \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{n}}\right) {c}_{1{k}_{1}}\cdots {c}_{n{k}_{n}} \]\n\n\[ = \mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{n}}\right) \left( {\mathop{\sum }\limits_{{r}_{1}}{a}_{1{r}_{1}}{b}_{{r}_{1}{k}_{1}}}\right) \cdots \left( {\mathop{\sum }\limits_{{r}_{n}}{a}_{n{r}_{n}}{b}_{{r}_{n}{k}_{n}}}\right) \]\n\n\[ = \mathop{\sum }\limits_{\left( {r}_{1}\cdots ,{r}_{n}\right) }\mathop{\sum }\limits_{\left( {k}_{1},\cdots ,{k}_{n}\right) }\operatorname{sgn}\left( {{k}_{1},\cdots ,{k}_{n}}\right) {b}_{{r}_{1}{k}_{1}}\cdots {b}_{{r}_{n}{k}_{n}}\left( {{a}_{1{r}_{1}}\cdots {a}_{n{r}_{n}}}\right) \]\n\n\[ = \mathop{\sum }\limits_{\left( {r}_{1}\cdots ,{r}_{n}\right) }\operatorname{sgn}\left( {{r}_{1}\cdots {r}_{n}}\right) {a}_{1{r}_{1}}\cdots {a}_{n{r}_{n}}\det \left( B\right) = \det \left( A\right) \det \left( B\right) .\blacksquare \]
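A small numeric instance of the theorem (a sketch; the matrices and the helper names `det2x2`, `matmul` are ours, chosen for illustration):

```python
def det2x2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 4], [-1, 6]]   # det = 16
B = [[1, 3], [5, 7]]    # det = -8
print(det2x2(matmul(A, B)), det2x2(A) * det2x2(B))  # -128 -128
```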
|
Yes
|
Theorem 3.3.14 Let \( A \) be an \( n \times m \) matrix with \( n \geq m \) and let \( B \) be a \( m \times n \) matrix. Also let \( {A}_{i} \) be the \( m \times m \) submatrices of \( A \) which are obtained by deleting \( n - m \) rows and let \( {B}_{i} \) be the \( m \times m \) submatrices of \( B \) which are obtained by deleting corresponding \( n - m \) columns. Then \[ \det \left( {BA}\right) = \mathop{\sum }\limits_{{k = 1}}^{{C\left( {n, m}\right) }}\det \left( {B}_{k}\right) \det \left( {A}_{k}\right) \]
|
Proof: This follows from a computation. By Corollary 3.3.8, \( \det \left( {BA}\right) = \)\[ \frac{1}{m!}\mathop{\sum }\limits_{\left( {i}_{1}\cdots {i}_{m}\right) }\mathop{\sum }\limits_{\left( {j}_{1}\cdots {j}_{m}\right) }\operatorname{sgn}\left( {{i}_{1}\cdots {i}_{m}}\right) \operatorname{sgn}\left( {{j}_{1}\cdots {j}_{m}}\right) {\left( BA\right) }_{{i}_{1}{j}_{1}}{\left( BA\right) }_{{i}_{2}{j}_{2}}\cdots {\left( BA\right) }_{{i}_{m}{j}_{m}} \]\[ = \frac{1}{m!}\mathop{\sum }\limits_{\left( {i}_{1}\cdots {i}_{m}\right) }\mathop{\sum }\limits_{\left( {j}_{1}\cdots {j}_{m}\right) }\operatorname{sgn}\left( {{i}_{1}\cdots {i}_{m}}\right) \operatorname{sgn}\left( {{j}_{1}\cdots {j}_{m}}\right) \cdot \]\[\mathop{\sum }\limits_{{{r}_{1} = 1}}^{n}{B}_{{i}_{1}{r}_{1}}{A}_{{r}_{1}{j}_{1}}\mathop{\sum }\limits_{{{r}_{2} = 1}}^{n}{B}_{{i}_{2}{r}_{2}}{A}_{{r}_{2}{j}_{2}}\cdots \mathop{\sum }\limits_{{{r}_{m} = 1}}^{n}{B}_{{i}_{m}{r}_{m}}{A}_{{r}_{m}{j}_{m}}\]Now denote by \( {I}_{k} \) one of the \( m \) element subsets of \( \{ 1,\cdots, n\} \) . 
Thus there are \( C\left( {n, m}\right) \) of these.\[ = \mathop{\sum }\limits_{{k = 1}}^{{C\left( {n, m}\right) }}\mathop{\sum }\limits_{{\left\{ {{r}_{1},\cdots ,{r}_{m}}\right\} = {I}_{k}}}\frac{1}{m!}\mathop{\sum }\limits_{\left( {i}_{1}\cdots {i}_{m}\right) }\mathop{\sum }\limits_{\left( {j}_{1}\cdots {j}_{m}\right) }\operatorname{sgn}\left( {{i}_{1}\cdots {i}_{m}}\right) \operatorname{sgn}\left( {{j}_{1}\cdots {j}_{m}}\right) \cdot \]\[ {B}_{{i}_{1}{r}_{1}}{A}_{{r}_{1}{j}_{1}}{B}_{{i}_{2}{r}_{2}}{A}_{{r}_{2}{j}_{2}}\cdots {B}_{{i}_{m}{r}_{m}}{A}_{{r}_{m}{j}_{m}} \]\[ = \mathop{\sum }\limits_{{k = 1}}^{{C\left( {n, m}\right) }}\mathop{\sum }\limits_{{\left\{ {{r}_{1},\cdots ,{r}_{m}}\right\} = {I}_{k}}}\frac{1}{m!}\mathop{\sum }\limits_{\left( {i}_{1}\cdots {i}_{m}\right) }\operatorname{sgn}\left( {{i}_{1}\cdots {i}_{m}}\right) {B}_{{i}_{1}{r}_{1}}{B}_{{i}_{2}{r}_{2}}\cdots {B}_{{i}_{m}{r}_{m}}\cdot \]\[\mathop{\sum }\limits_{\left( {j}_{1}\cdots {j}_{m}\right) }\operatorname{sgn}\left( {{j}_{1}\cdots {j}_{m}}\right) {A}_{{r}_{1}{j}_{1}}{A}_{{r}_{2}{j}_{2}}\cdots {A}_{{r}_{m}{j}_{m}} \]\[ = \mathop{\sum }\limits_{{k = 1}}^{{C\left( {n, m}\right) }}\mathop{\sum }\limits_{{\left\{ {{r}_{1},\cdots ,{r}_{m}}\right\} = {I}_{k}}}\frac{1}{m!}\operatorname{sgn}{\left( {r}_{1}\cdots {r}_{m}\right) }^{2}\det \left( {B}_{k}\right) \det \left( {A}_{k}\right) \]\[ = \mathop{\sum }\limits_{{k = 1}}^{{C\left( {n, m}\right) }}\det \left( {B}_{k}\right) \det \left( {A}_{k}\right) \]since there are \( m \) ! ways of arranging the indices \( \left\{ {{r}_{1},\cdots ,{r}_{m}}\right\} \) .
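A small numeric instance of this formula (a sketch with arbitrarily chosen matrices; helper names are ours): with \( m = 2, n = 3 \) there are \( C\left( {3,2}\right) = 3 \) terms in the sum.

```python
from itertools import combinations

def det2x2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# B is 2x3 and A is 3x2, so BA is 2x2.
B = [[1, 2, 3], [4, 5, 6]]
A = [[7, 8], [9, 1], [2, 3]]

lhs = det2x2(matmul(B, A))
rhs = sum(det2x2([[B[0][i], B[0][j]],            # columns i, j of B
                  [B[1][i], B[1][j]]]) *
          det2x2([A[i], A[j]])                    # rows i, j of A
          for i, j in combinations(range(3), 2))
print(lhs, rhs)  # 90 90
```

Deleting a pair of columns of \( B \) and the corresponding pair of rows of \( A \) produces the matching \( 2 \times 2 \) submatrices \( {B}_{k},{A}_{k} \), and the three products of their determinants sum to \( \det \left( {BA}\right) \).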
|
Yes
|