| Q | A | Result |
|---|---|---|
Theorem 7.2.11 The Pigeonhole Principle. Let \( f \) be a function from a finite set \( X \) into a finite set \( Y \) . If \( n \geq 1 \) and \( \left| X\right| > n\left| Y\right| \), then there exists an element of \( Y \) that is the image under \( f \) of at least \( n + 1 \) elements of \( X \) .
|
Proof. Assume no such element exists. For each \( y \in Y \), let \( {A}_{y} = \{ x \in X \mid \) \( f\left( x\right) = y\} \) . Then it must be that \( \left| {A}_{y}\right| \leq n \) . Furthermore, the nonempty sets \( {A}_{y} \) form a partition of \( X \) . Therefore,\n\n\[\n\left| X\right| = \mathop{\sum }\limits_{{y \in Y}}\left| {A}_{y}\right| \leq n\left| Y\right|\n\]\n\nwhich is a contradiction.
|
Yes
|
A duplicate name is assured. Assume that a room contains four students whose first names are chosen from John, James, and Mary. Prove that two students have the same first name.
|
We can visualize a mapping from the set of students to the set of first names; each student has a first name. The pigeonhole principle applies with \( n = 1 \), and we can conclude that at least two of the students have the same first name.
|
Yes
|
Example 7.3.3 A basic example. Let \( f : \{ 1,2,3\} \rightarrow \{ a, b\} \) be defined by \( f\left( 1\right) = a, f\left( 2\right) = a \), and \( f\left( 3\right) = b \) . Let \( g : \{ a, b\} \rightarrow \{ 5,6,7\} \) be defined by \( g\left( a\right) = 5 \) and \( g\left( b\right) = 7 \) . Then \( g \circ f : \{ 1,2,3\} \rightarrow \{ 5,6,7\} \) is defined by \( \left( {g \circ f}\right) \left( 1\right) = 5,\left( {g \circ f}\right) \left( 2\right) = 5 \), and \( \left( {g \circ f}\right) \left( 3\right) = 7 \) .
|
For example, \( \left( {g \circ f}\right) \left( 1\right) = g\left( {f\left( 1\right) }\right) = g\left( a\right) = 5 \) . Note that \( f \circ g \) is not defined. Why?
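As an aside not in the text, the composition can be checked mechanically by modeling the two functions as Python dictionaries (a sketch; the names `f`, `g`, and `g_of_f` are ours):

```python
# Model the example's functions as dictionaries (domain element -> image).
f = {1: "a", 2: "a", 3: "b"}
g = {"a": 5, "b": 7}

# (g o f)(x) = g(f(x)): apply f first, then g.
g_of_f = {x: g[f[x]] for x in f}

# f o g is undefined because the images of g (5 and 7) are not in f's domain.
f_after_g_defined = all(y in f for y in g.values())
```

The last line is one way to answer the "Why?": the codomain of \( g \) is not the domain of \( f \).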
|
Yes
|
Theorem 7.3.4 Function composition is associative. If \( f : A \rightarrow B \) , \( g : B \rightarrow C \), and \( h : C \rightarrow D \), then \( h \circ \left( {g \circ f}\right) = \left( {h \circ g}\right) \circ f \) .
|
Proof. Note: In order to prove that two functions are equal, we must use the definition of equality of functions. Assuming that the functions have the same domain, they are equal if, for each domain element, the images of that element under the two functions are equal.\n\nWe wish to prove that \( \left( {h \circ \left( {g \circ f}\right) }\right) \left( x\right) = \left( {\left( {h \circ g}\right) \circ f}\right) \left( x\right) \) for all \( x \in A \), which is the domain of both functions.\n\n\[ \left( {h \circ \left( {g \circ f}\right) }\right) \left( x\right) = h\left( {\left( {g \circ f}\right) \left( x\right) }\right) \text{by the definition of composition} \]\n\n\[ = h\left( {g\left( {f\left( x\right) }\right) }\right) \text{by the definition of composition} \]\n\nSimilarly,\n\n\[ \left( {\left( {h \circ g}\right) \circ f}\right) \left( x\right) = \left( {h \circ g}\right) \left( {f\left( x\right) }\right) \text{by the definition of composition} \]\n\n\[ = h\left( {g\left( {f\left( x\right) }\right) }\right) \text{by the definition of composition} \]\n\nNotice that no matter how the expression \( h \circ g \circ f \) is grouped, the final image of any element \( x \in A \) is \( h\left( {g\left( {f\left( x\right) }\right) }\right) \), and so \( h \circ \left( {g \circ f}\right) = \left( {h \circ g}\right) \circ f \) .
|
Yes
|
The inverse of a function on \( \{ 1,2,3\} \) . Let \( A = \{ 1,2,3\} \) and let \( f \) be the function defined on \( A \) such that \( f\left( 1\right) = 2, f\left( 2\right) = 3 \), and \( f\left( 3\right) = 1 \) .
|
Then \( {f}^{-1} : A \rightarrow A \) is defined by \( {f}^{-1}\left( 1\right) = 3,{f}^{-1}\left( 2\right) = 1 \), and \( {f}^{-1}\left( 3\right) = 2 \) .
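A hedged illustration (our addition, not part of the text): since \( f \) is a bijection on a finite set, its inverse can be computed by simply reversing each ordered pair:

```python
f = {1: 2, 2: 3, 3: 1}

# Reverse each ordered pair (x, f(x)) to obtain the inverse function.
f_inv = {image: x for x, image in f.items()}

# Composing f with its inverse gives the identity on {1, 2, 3}.
identity = {x: f_inv[f[x]] for x in f}
```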
|
Yes
|
Theorem 7.3.14 Bijections have inverses. Let \( f : A \rightarrow A \) . Then \( {f}^{-1} \) exists if and only if \( f \) is a bijection; i.e., \( f \) is one-to-one and onto.
|
Proof. \( \left( \Rightarrow \right) \) In this half of the proof, assume that \( {f}^{-1} \) exists and we must prove that \( f \) is one-to-one and onto. To do so, it is convenient for us to use the relation notation, where \( f\left( s\right) = t \) is equivalent to \( \left( {s, t}\right) \in f \) . To prove that \( f \) is one-to-one, assume that \( f\left( a\right) = f\left( b\right) = c \) . In relation notation, that means \( \left( {a, c}\right) \) and \( \left( {b, c}\right) \) are elements of \( f \) . We must show that \( a = b \) . Since \( \left( {a, c}\right) ,\left( {b, c}\right) \in f \) , \( \left( {c, a}\right) \) and \( \left( {c, b}\right) \) are in \( {f}^{-1} \) . By the fact that \( {f}^{-1} \) is a function and \( c \) cannot have two images, \( a \) and \( b \) must be equal, so \( f \) is one-to-one.\n\nNext, to prove that \( f \) is onto, observe that for \( {f}^{-1} \) to be a function, it must use all of its domain, namely \( A \) . Let \( b \) be any element of \( A \) . Then \( b \) has an image under \( {f}^{-1} \), \( {f}^{-1}\left( b\right) \) . Another way of writing this is \( \left( {b,{f}^{-1}\left( b\right) }\right) \in {f}^{-1} \) . By the definition of the inverse, this is equivalent to \( \left( {{f}^{-1}\left( b\right), b}\right) \in f \) . Hence, \( b \) is in the range of \( f \) . Since \( b \) was chosen arbitrarily, this shows that the range of \( f \) must be all of \( A \).\n\n\( \left( \Leftarrow \right) \) Assume \( f \) is one-to-one and onto and we are to prove \( {f}^{-1} \) exists. We leave this half of the proof to the reader. \( ▱ \)
|
No
|
Example 7.3.18 Another inverse. Let \( A = \{ 1,2,3\} \) and \( B = \{ a, b, c\} \) . Define \( f : A \rightarrow B \) by \( f\left( 1\right) = a, f\left( 2\right) = b \), and \( f\left( 3\right) = c \) . Then \( g : B \rightarrow A \) defined by \( g\left( a\right) = 1, g\left( b\right) = 2 \), and \( g\left( c\right) = 3 \) is the inverse of \( f \) .
|
\[ \left. \begin{array}{l} \left( {g \circ f}\right) \left( 1\right) = 1 \\ \left( {g \circ f}\right) \left( 2\right) = 2 \\ \left( {g \circ f}\right) \left( 3\right) = 3 \end{array}\right\} \Rightarrow g \circ f = {i}_{A}\text{ and }\left. \begin{array}{l} \left( {f \circ g}\right) \left( a\right) = a \\ \left( {f \circ g}\right) \left( b\right) = b \\ \left( {f \circ g}\right) \left( c\right) = c \end{array}\right\} \Rightarrow f \circ g = {i}_{B} \]
|
Yes
|
Define the sequence of numbers \( B \) by\n\n\[ \n{B}_{0} = {100}\text{and} \n\]\n\n\[ \n{B}_{k} = {1.08}{B}_{k - 1}\text{ for }k \geq 1. \n\]
|
These rules stipulate that each number in the list is 1.08 times the previous number, with the starting number equal to 100 . For example\n\n\[ \n{B}_{3} = {1.08}{B}_{2} \n\]\n\n\[ \n= {1.08}\left( {{1.08}{B}_{1}}\right) \n\]\n\n\[ \n= {1.08}\left( {{1.08}\left( {{1.08}{B}_{0}}\right) }\right) \n\]\n\n\[ \n= {1.08}\left( {{1.08}\left( {{1.08} \cdot {100}}\right) }\right) \n\]\n\n\[ \n= {1.08}^{3} \cdot {100} = {125.971} \n\]
|
Yes
|
The Fibonacci sequence is the sequence \( F \) defined by\n\n\[ \n{F}_{0} = 1,{F}_{1} = 1\text{and} \n\]\n\n\[ \n{F}_{k} = {F}_{k - 2} + {F}_{k - 1}\text{ for }k \geq 2 \n\]
|
To determine, for example, the fourth item in the Fibonacci sequence we repeatedly apply the recursive rule for \( F \) until we are left with an expression involving \( {F}_{0} \) and \( {F}_{1} \) :\n\n\[ \n{F}_{4} = {F}_{2} + {F}_{3} \n\]\n\n\[ \n= \left( {{F}_{0} + {F}_{1}}\right) + \left( {{F}_{1} + {F}_{2}}\right) \n\]\n\n\[ \n= \left( {{F}_{0} + {F}_{1}}\right) + \left( {{F}_{1} + \left( {{F}_{0} + {F}_{1}}\right) }\right) \n\]\n\n\[ \n= \left( {1 + 1}\right) + \left( {1 + \left( {1 + 1}\right) }\right) \n\]\n\n\[ \n= 5 \n\]
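The recursive rule translates directly into code. A short sketch (ours, not the text's), using the text's convention \( {F}_{0} = {F}_{1} = 1 \):

```python
def fib(k):
    """Fibonacci numbers with F(0) = F(1) = 1, as defined in this text."""
    a, b = 1, 1  # F(0), F(1)
    for _ in range(k):
        a, b = b, a + b
    return a
```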
|
Yes
|
A formula for the sequence \( B \) in Example 8.1.7 is \( {B}_{k} = {100}{\left( {1.08}\right) }^{k} \) for \( k \geq 0 \).
|
If \( k = 0 \), then \( {B}_{0} = {100}{\left( {1.08}\right) }^{0} = {100} \), as defined. Now assume that for some \( k \geq 1 \), the formula for \( {B}_{k} \) is true.\n\n\[ \n{B}_{k + 1} = {1.08}{B}_{k}\text{by the recursive definition} \]\n\n\[ \n= {1.08}\left( {{100}{\left( {1.08}\right) }^{k}}\right) \text{by the induction hypothesis} \]\n\n\[ \n= {100}{\left( {1.08}\right) }^{k + 1} \]\n\nhence the formula is true for \( k + 1 \).
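The agreement between the recursive definition and the closed form can also be confirmed numerically; a sketch (our addition), using `math.isclose` for floating-point comparison:

```python
import math

def B_recursive(k):
    # B(0) = 100, B(k) = 1.08 B(k-1)
    return 100.0 if k == 0 else 1.08 * B_recursive(k - 1)

def B_closed(k):
    # the closed form 100 (1.08)^k
    return 100.0 * 1.08 ** k

agree = all(math.isclose(B_recursive(k), B_closed(k)) for k in range(25))
```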
|
Yes
|
Example 8.3.8 First Order Homogeneous Recurrence Relations. \( D\left( k\right) - {2D}\left( {k - 1}\right) = 0 \) is a first-order homogeneous relation.
|
Since it can also be written as \( D\left( k\right) = {2D}\left( {k - 1}\right) \), it should be no surprise that it arose from an expression that involves powers of 2 . More generally, you would expect that the solution of \( L\left( k\right) - {aL}\left( {k - 1}\right) = 0 \) would involve \( {a}^{k} \) . Actually, the solution is \( L\left( k\right) = L\left( 0\right) {a}^{k} \), where the value of \( L\left( 0\right) \) is given by the initial condition.
|
Yes
|
Consider the second-order homogeneous relation \( S\left( k\right) - {7S}\left( {k - 1}\right) + {12S}\left( {k - 2}\right) = 0 \) together with the initial conditions \( S\left( 0\right) = 4 \) and \( S\left( 1\right) = 4 \).
|
From our discussion above, we can predict that the solution to this relation involves terms of the form \( b{a}^{k} \), where \( b \) and \( a \) are nonzero constants that must be determined. If the solution were to equal this quantity exactly, then\n\n\[ S\left( k\right) = b{a}^{k} \]\n\n\[ S\left( {k - 1}\right) = b{a}^{k - 1} \]\n\n\[ S\left( {k - 2}\right) = b{a}^{k - 2} \]\n\nSubstitute these expressions into the recurrence relation to get\n\n\[ b{a}^{k} - {7b}{a}^{k - 1} + {12b}{a}^{k - 2} = 0 \]\n\nEach term on the left-hand side of this equation has a factor of \( b{a}^{k - 2} \), which is nonzero. Dividing through by this common factor yields\n\n\[ {a}^{2} - {7a} + {12} = \left( {a - 3}\right) \left( {a - 4}\right) = 0 \]\n\n(8.3.1)\n\nTherefore, the only possible values of \( a \) are 3 and 4 . Equation (8.3.1) is called the characteristic equation of the recurrence relation. The fact is that our original recurrence relation is true for any sequence of the form \( S\left( k\right) = \) \( {b}_{1}{3}^{k} + {b}_{2}{4}^{k} \), where \( {b}_{1} \) and \( {b}_{2} \) are real numbers. This set of sequences is called the general solution of the recurrence relation. If we didn't have initial conditions for \( S \), we would stop here. The initial conditions make it possible for us to find definite values for \( {b}_{1} \) and \( {b}_{2} \).\n\n\[ \left\{ \begin{array}{l} S\left( 0\right) = 4 \\ S\left( 1\right) = 4 \end{array}\right\} \Rightarrow \left\{ \begin{array}{l} {b}_{1}{3}^{0} + {b}_{2}{4}^{0} = 4 \\ {b}_{1}{3}^{1} + {b}_{2}{4}^{1} = 4 \end{array}\right\} \Rightarrow \left\{ \begin{matrix} {b}_{1} + {b}_{2} = 4 \\ 3{b}_{1} + 4{b}_{2} = 4 \end{matrix}\right\} \]\n\nThe solution of this set of simultaneous equations is \( {b}_{1} = {12} \) and \( {b}_{2} = - 8 \) and so the solution is \( S\left( k\right) = {12} \cdot {3}^{k} - 8 \cdot {4}^{k} \).
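A quick numeric check of the final answer (our addition, not part of the text):

```python
def S(k):
    # the claimed solution S(k) = 12 * 3^k - 8 * 4^k
    return 12 * 3**k - 8 * 4**k

initial_ok = (S(0) == 4 and S(1) == 4)
recurrence_ok = all(S(k) - 7 * S(k - 1) + 12 * S(k - 2) == 0 for k in range(2, 20))
```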
|
Yes
|
Suppose that \( T \) is defined by \( T\left( k\right) = {7T}\left( {k - 1}\right) - {10T}\left( {k - 2}\right) \), with \( T\left( 0\right) = 4 \) and \( T\left( 1\right) = {17} \).
|
(a) Note that we have written the recurrence relation in \
|
No
|
Solve \( S\left( k\right) - {7S}\left( {k - 2}\right) + {6S}\left( {k - 3}\right) = 0 \), where \( S\left( 0\right) = 8, S\left( 1\right) = 6 \), and \( S\left( 2\right) = {22} \).
|
(a) The characteristic equation is \( {a}^{3} - {7a} + 6 = 0 \).\n\n(b) The only rational roots that we can attempt are \( \pm 1, \pm 2, \pm 3 \), and \( \pm 6 \). By checking these, we obtain the three roots 1, 2, and -3 .\n\n(c) The general solution is \( S\left( k\right) = {b}_{1}{1}^{k} + {b}_{2}{2}^{k} + {b}_{3}{\left( -3\right) }^{k} \). The first term can simply be written \( {b}_{1} \).\n\n(d) \( \left\{ \begin{matrix} S\left( 0\right) = 8 \\ S\left( 1\right) = 6 \\ S\left( 2\right) = {22} \end{matrix}\right\} \Rightarrow \left\{ \begin{matrix} {b}_{1} + {b}_{2} + {b}_{3} = 8 \\ {b}_{1} + 2{b}_{2} - 3{b}_{3} = 6 \\ {b}_{1} + 4{b}_{2} + 9{b}_{3} = {22} \end{matrix}\right\} \) You can solve this system by elimination to obtain \( {b}_{1} = 5,{b}_{2} = 2 \), and \( {b}_{3} = 1 \). Therefore,\n\n\( S\left( k\right) = 5 + 2 \cdot {2}^{k} + {\left( -3\right) }^{k} = 5 + {2}^{k + 1} + {\left( -3\right) }^{k} \)
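As a sanity check (ours), the closed form can be verified against both the initial conditions and the recurrence:

```python
def S(k):
    # the claimed solution S(k) = 5 + 2^(k+1) + (-3)^k
    return 5 + 2**(k + 1) + (-3)**k

initial_ok = (S(0), S(1), S(2)) == (8, 6, 22)
recurrence_ok = all(S(k) - 7 * S(k - 2) + 6 * S(k - 3) == 0 for k in range(3, 20))
```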
|
Yes
|
Solve \( D\left( k\right) - {8D}\left( {k - 1}\right) + {16D}\left( {k - 2}\right) = 0 \), where \( D\left( 2\right) = {16} \) and \( D\left( 3\right) = {80} \).
|
(a) Characteristic equation: \( {a}^{2} - {8a} + {16} = 0 \).\n\n(b) \( {a}^{2} - {8a} + {16} = {\left( a - 4\right) }^{2} \). Therefore, there is a double characteristic root, \( a = 4 \).\n\n(c) General solution: \( D\left( k\right) = \left( {{b}_{10} + {b}_{11}k}\right) {4}^{k} \).\n\n(d) \( \left\{ \begin{array}{l} D\left( 2\right) = {16} \\ D\left( 3\right) = {80} \end{array}\right\} \Rightarrow \left\{ \begin{array}{l} \left( {{b}_{10} + {b}_{11}2}\right) {4}^{2} = {16} \\ \left( {{b}_{10} + {b}_{11}3}\right) {4}^{3} = {80} \end{array}\right\} \n\n\( \Rightarrow \left\{ \begin{matrix} {16}{b}_{10} + {32}{b}_{11} = {16} \\ {64}{b}_{10} + {192}{b}_{11} = {80} \end{matrix}\right\} \Rightarrow \left\{ \begin{array}{l} {b}_{10} = \frac{1}{2} \\ {b}_{11} = \frac{1}{4} \end{array}\right\} \)\n\nTherefore \( D\left( k\right) = \left( {1/2 + \left( {1/4}\right) k}\right) {4}^{k} = \left( {2 + k}\right) {4}^{k - 1} \).
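A numeric verification of the double-root solution (our addition; `Fraction` keeps the \( {4}^{k-1} \) factor exact even for \( k = 0, 1 \)):

```python
from fractions import Fraction

def D(k):
    # the claimed solution (2 + k) * 4^(k-1), kept exact with Fraction
    return (2 + k) * Fraction(4) ** (k - 1)

initial_ok = (D(2), D(3)) == (16, 80)
recurrence_ok = all(D(k) - 8 * D(k - 1) + 16 * D(k - 2) == 0 for k in range(2, 15))
```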
|
Yes
|
Solve \( S\left( k\right) + {5S}\left( {k - 1}\right) = 9 \), with \( S\left( 0\right) = 6 \) .
|
(a) The associated homogeneous relation, \( S\left( k\right) + {5S}\left( {k - 1}\right) = 0 \) has the characteristic equation \( a + 5 = 0 \) ; therefore, \( a = - 5 \) . The homogeneous solution is \( {S}^{\left( h\right) }\left( k\right) = b{\left( -5\right) }^{k} \).\n\n(b) Since the right-hand side is a constant, we guess that the particular solution will be a constant, \( d \).\n\n(c) If we substitute \( {S}^{\left( p\right) }\left( k\right) = d \) into the recurrence relation, we get \( d + {5d} = 9 \) , or \( {6d} = 9 \) . Therefore, \( {S}^{\left( p\right) }\left( k\right) = {1.5} \).\n\n(d) The general solution of the recurrence relation is \( \;S\left( k\right) = {S}^{\left( h\right) }\left( k\right) + \) \( {S}^{\left( p\right) }\left( k\right) = b{\left( -5\right) }^{k} + {1.5} \) The initial condition will give us one equation to solve in order to determine \( b.\;S\left( 0\right) = 6 \Rightarrow b{\left( -5\right) }^{0} + {1.5} = 6 \Rightarrow \) \( b + {1.5} = 6 \) Therefore, \( b = {4.5} \) and \( S\left( k\right) = {4.5}{\left( -5\right) }^{k} + {1.5} \).
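A quick check of the final answer (our addition; `Fraction` avoids floating-point round-off):

```python
from fractions import Fraction

def S(k):
    # the claimed solution S(k) = 4.5 (-5)^k + 1.5, kept exact
    return Fraction(9, 2) * (-5) ** k + Fraction(3, 2)

initial_ok = S(0) == 6
recurrence_ok = all(S(k) + 5 * S(k - 1) == 9 for k in range(1, 20))
```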
|
Yes
|
Consider \( T\left( k\right) - {7T}\left( {k - 1}\right) + {10T}\left( {k - 2}\right) = 6 + {8k} \) with \( T\left( 0\right) = 1 \) and \( T\left( 1\right) = 2 \) .
|
(a) From Example 8.3.13, we know that \( {T}^{\left( h\right) }\left( k\right) = {b}_{1}{2}^{k} + {b}_{2}{5}^{k} \) . Caution:Don’t apply the initial conditions to \( {T}^{\left( h\right) } \) until you add \( {T}^{\left( p\right) } \) !\n\n(b) Since the right-hand side is a linear polynomial, \( {T}^{\left( p\right) } \) is linear; that is, \( {T}^{\left( p\right) }\left( k\right) = {d}_{0} + {d}_{1}k \) .\n\n(c) Substitution into the recurrence relation yields: \( \left( {{d}_{0} + {d}_{1}k}\right) - 7\left( {{d}_{0} + {d}_{1}\left( {k - 1}\right) }\right) + \) \( {10}\left( {{d}_{0} + {d}_{1}\left( {k - 2}\right) }\right) = 6 + {8k}\; \Rightarrow \left( {4{d}_{0} - {13}{d}_{1}}\right) + \left( {4{d}_{1}}\right) k = 6 + {8k} \) Two polynomials are equal only if their coefficients are equal. Therefore,\n\n\[ \left\{ \begin{matrix} 4{d}_{0} - {13}{d}_{1} = 6 \\ 4{d}_{1} = 8 \end{matrix}\right\} \Rightarrow \left\{ \begin{array}{l} {d}_{0} = 8 \\ {d}_{1} = 2 \end{array}\right\} \]\n\n(d) Use the general solution \( T\left( k\right) = {b}_{1}{2}^{k} + {b}_{2}{5}^{k} + 8 + {2k} \) and the initial condi-\n\nconditions to get a final solution: \( \;\left\{ \begin{array}{l} T\left( 0\right) = 1 \\ T\left( 1\right) = 2 \end{array}\right\} \Rightarrow \left\{ \begin{matrix} {b}_{1} + {b}_{2} + 8 = 1 \\ 2{b}_{1} + 5{b}_{2} + {10} = 2 \end{matrix}\right\} \)\n\n\[ \Rightarrow \left\{ \begin{matrix} {b}_{1} + {b}_{2} = - 7 \\ 2{b}_{1} + 5{b}_{2} = - 8 \end{matrix}\right\} \]\n\n\[ \Rightarrow \left\{ \begin{matrix} {b}_{1} = - 9 \\ {b}_{2} = 2 \end{matrix}\right\} \]\n\nTherefore, \( T\left( k\right) = - 9 \cdot {2}^{k} + 2 \cdot {5}^{k} + 8 + {2k} \) .
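A numeric verification of the final solution (ours, not part of the text):

```python
def T(k):
    # the claimed solution T(k) = -9 * 2^k + 2 * 5^k + 8 + 2k
    return -9 * 2**k + 2 * 5**k + 8 + 2 * k

initial_ok = (T(0), T(1)) == (1, 2)
recurrence_ok = all(T(k) - 7 * T(k - 1) + 10 * T(k - 2) == 6 + 8 * k for k in range(2, 20))
```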
|
Yes
|
Suppose you open a savings account that pays an annual interest rate of \( 8\% \) . In addition, suppose you decide to deposit one dollar when you open the account, and you intend to double your deposit each year. Let \( B\left( k\right) \) be your balance after \( k \) years. \( B \) can be described by the relation \( B\left( k\right) = {1.08B}\left( {k - 1}\right) + {2}^{k} \), with \( S\left( 0\right) = 1 \) .
|
Returning to the original situation, (a) \( {B}^{\left( h\right) }\left( k\right) = {b}_{1}{\left( {1.08}\right) }^{k} \) (b) \( {B}^{\left( p\right) }\left( k\right) \) should be of the form \( d{2}^{k} \) . (c) \[ d{2}^{k} = {1.08d}{2}^{k - 1} + {2}^{k} \Rightarrow \left( {2d}\right) {2}^{k - 1} = {1.08d}{2}^{k - 1} + 2 \cdot {2}^{k - 1} \] \[ \Rightarrow {2d} = {1.08d} + 2 \] \[ \Rightarrow {.92d} = 2 \] \[ \Rightarrow d = {2.174}\text{(to the nearest thousandth)} \] Therefore \( {B}^{\left( p\right) }\left( k\right) = {2.174} \cdot {2}^{k} \) . (d) \( B\left( 0\right) = 1 \Rightarrow {b}_{1} + {2.174} = 1 \) \( \Rightarrow {b}_{1} = - {1.174} \) Therefore, \( B\left( k\right) = - {1.174} \cdot {1.08}^{k} + {2.174} \cdot {2}^{k} \) .
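The rounded constant \( {2.174} \) is exactly \( 2/{0.92} = {50}/{23} \), and working with exact fractions lets the recurrence be verified with no round-off (a sketch, our addition):

```python
from fractions import Fraction

r = Fraction(108, 100)   # 1.08 as an exact fraction
d = 2 / (2 - r)          # the step 0.92 d = 2, solved exactly: d = 50/23
b1 = 1 - d               # from the initial condition B(0) = 1

def B(k):
    return b1 * r**k + d * 2**k

initial_ok = B(0) == 1
recurrence_ok = all(B(k) == r * B(k - 1) + 2**k for k in range(1, 15))
```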
|
Yes
|
Find the general solution to \( S\left( k\right) - 3S\left( k - 1\right) - 4S\left( k - 2\right) = 4^{k} \)
|
(a) The characteristic roots of the associated homogeneous relation are -1 and 4. Therefore, \( S^{\left( h\right) }\left( k\right) = b_{1}\left( -1\right)^{k} + b_{2}4^{k} \).\n\n(b) A function of the form \( d4^{k} \) will not be a particular solution of the nonhomogeneous relation since it solves the associated homogeneous relation. When the right-hand side involves an exponential function with a base that equals a characteristic root, you should multiply your guess at a particular solution by \( k \). Our guess at \( S^{\left( p\right) }\left( k\right) \) would then be \( dk4^{k} \). See Observation 8.3.23 for a more complete description of this rule.\n\n(c) Substitute \( dk4^{k} \) into the recurrence relation for \( S\left( k\right) \):\n\n\[ dk4^{k} - 3d\left( k - 1\right) 4^{k - 1} - 4d\left( k - 2\right) 4^{k - 2} = 4^{k} \]\n\n\[ 16dk4^{k - 2} - 12d\left( k - 1\right) 4^{k - 2} - 4d\left( k - 2\right) 4^{k - 2} = 4^{k} \]\n\nEach term on the left-hand side has a factor of \( 4^{k - 2} \)\n\n\[ 16dk - 12d\left( k - 1\right) - 4d\left( k - 2\right) = 4^{2} \]\n\n\[ 20d = 16 \Rightarrow d = 0.8 \]\n\nTherefore, \( S^{\left( p\right) }\left( k\right) = 0.8k4^{k} \).\n\n(d) The general solution to the recurrence relation is\n\n\[ S\left( k\right) = b_{1}\left( -1\right)^{k} + b_{2}4^{k} + 0.8k4^{k} \]
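The particular solution \( 0.8k{4}^{k} \) can be verified directly against the nonhomogeneous relation (our addition; `Fraction(4, 5)` represents 0.8 exactly):

```python
from fractions import Fraction

d = Fraction(4, 5)   # 0.8, kept exact

def S_p(k):
    # the particular solution 0.8 * k * 4^k
    return d * k * 4**k

particular_ok = all(
    S_p(k) - 3 * S_p(k - 1) - 4 * S_p(k - 2) == 4**k for k in range(2, 15)
)
```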
|
Yes
|
Theorem 8.4.3 Fundamental Properties of Logarithms. Let \( a \) and \( b \) be positive real numbers, and \( r \) a real number.
|
\[ {\log }_{2}1 = 0 \] (8.4.4) \[ {\log }_{2}{ab} = {\log }_{2}a + {\log }_{2}b \] (8.4.5) \[ {\log }_{2}\frac{a}{b} = {\log }_{2}a - {\log }_{2}b \] (8.4.6) \[ {\log }_{2}{a}^{r} = r{\log }_{2}a \] (8.4.7) \[ {2}^{{\log }_{2}a} = a \] (8.4.8)
|
Yes
|
Theorem 8.4.5 How logarithms with different bases are related. Let \( b > 0, b \neq 1 \) . Then for all \( a > 0,{\log }_{b}a = \frac{{\log }_{2}a}{{\log }_{2}b} \) . Therefore, if \( b > 1 \), base \( b \) logarithms can be computed from base 2 logarithms by dividing by the positive scaling factor \( {\log }_{2}b \) . If \( b < 1 \), this scaling factor is negative.
|
Proof. By an analogue of (8.4.8), \( a = {b}^{{\log }_{b}a} \) . Therefore, if we take the base 2 logarithm of both sides of this equality we get:\n\n\[{\log }_{2}a = {\log }_{2}\left( {b}^{{\log }_{b}a}\right) \Rightarrow {\log }_{2}a = {\log }_{b}a \cdot {\log }_{2}b\]\n\nFinally, divide both sides of the last equation by \( {\log }_{2}b \) .
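The change-of-base identity is easy to spot-check numerically (a sketch, our addition), including a base \( b < 1 \), where the scaling factor \( {\log }_{2}b \) is negative:

```python
import math

# Check log_b(a) == log_2(a) / log_2(b) for several bases and arguments.
change_of_base_ok = all(
    math.isclose(math.log(a, b), math.log2(a) / math.log2(b))
    for b in (10, 3, 0.5)
    for a in (0.25, 2, 7, 100)
)
```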
|
Yes
|
(a) If \( {S}_{n} = {3}^{n}, n \geq 0 \), then\n\n\[ G\left( {S;z}\right) = 1 + {3z} + 9{z}^{2} + {27}{z}^{3} + \cdots \]\n\n\[ = \mathop{\sum }\limits_{{n = 0}}^{\infty }{3}^{n}{z}^{n} \]\n\n\[ = \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( 3z\right) }^{n} \]
|
We can get a closed form expression for \( G\left( {S;z}\right) \) by observing that \( G\left( {S;z}\right) - \) \( {3zG}\left( {S;z}\right) = 1 \) . Therefore, \( G\left( {S;z}\right) = \frac{1}{1 - {3z}} \).
|
Yes
|
Example 8.5.8 Some operations on generating functions. If \( D\left( z\right) = \) \( \mathop{\sum }\limits_{{k = 0}}^{\infty }k{z}^{k} \) and \( H\left( z\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }{2}^{k}{z}^{k} \) then
|
\[ \left( {D + H}\right) \left( z\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }\left( {k + {2}^{k}}\right) {z}^{k} \] \[ \left( {2H}\right) \left( z\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }2 \cdot {2}^{k}{z}^{k} = \mathop{\sum }\limits_{{k = 0}}^{\infty }{2}^{k + 1}{z}^{k} \] \[ \left( {zD}\right) \left( z\right) = z\mathop{\sum }\limits_{{k = 0}}^{\infty }k{z}^{k} = \mathop{\sum }\limits_{{k = 0}}^{\infty }k{z}^{k + 1} \] \[ = \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {k - 1}\right) {z}^{k} = D\left( z\right) - \mathop{\sum }\limits_{{k = 1}}^{\infty }{z}^{k} \] \[ \left( {DH}\right) \left( z\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }\left( {\mathop{\sum }\limits_{{j = 0}}^{k}j{2}^{k - j}}\right) {z}^{k} \] \[ \left( {HH}\right) \left( z\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }\left( {\mathop{\sum }\limits_{{j = 0}}^{k}{2}^{j}{2}^{k - j}}\right) {z}^{k} = \mathop{\sum }\limits_{{k = 0}}^{\infty }\left( {k + 1}\right) {2}^{k}{z}^{k} \]
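The product of two generating functions is the convolution of their coefficient sequences, so the closed form for \( \left( {HH}\right) \left( z\right) \) can be checked term by term (our addition):

```python
def convolve(F, G):
    """Coefficient k of the product series, given coefficient lists F and G."""
    n = min(len(F), len(G))
    return [sum(F[j] * G[k - j] for j in range(k + 1)) for k in range(n)]

H = [2**k for k in range(12)]
HH = convolve(H, H)

# The text's claim: the k-th coefficient of (HH)(z) is (k + 1) * 2^k.
identity_ok = HH == [(k + 1) * 2**k for k in range(12)]
```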
|
Yes
|
Theorem 8.5.9 Generating functions related to Pop and Push. If \( p \geq 1 \)\n\n(a) \( G\left( {S \uparrow p;z}\right) = \left( {G\left( {S;z}\right) - \mathop{\sum }\limits_{{k = 0}}^{{p - 1}}S\left( k\right) {z}^{k}}\right) /{z}^{p} \)
|
Proof. We prove (a) by induction and leave the proof of (b) to the reader.\n\nBasis:\n\n\[ G\left( {S \uparrow ;z}\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }S\left( {k + 1}\right) {z}^{k} \]\n\n\[ = \mathop{\sum }\limits_{{k = 1}}^{\infty }S\left( k\right) {z}^{k - 1} \]\n\n\[ = \left( {\mathop{\sum }\limits_{{k = 1}}^{\infty }S\left( k\right) {z}^{k}}\right) /z \]\n\n\[ = \left( {S\left( 0\right) + \mathop{\sum }\limits_{{k = 1}}^{\infty }S\left( k\right) {z}^{k} - S\left( 0\right) }\right) /z \]\n\n\[ = \left( {G\left( {S;z}\right) - S\left( 0\right) }\right) /z \]\n\nTherefore, part (a) is true for \( p = 1 \).\n\nInduction: Suppose that for some \( p \geq 1 \), the statement in part (a) is true:\n\n\[ G\left( {S \uparrow \left( {p + 1}\right) ;z}\right) = G\left( {\left( {S \uparrow p}\right) \uparrow ;z}\right) \]\n\n\[ = \left( {G\left( {S \uparrow p;z}\right) - \left( {S \uparrow p}\right) \left( 0\right) }\right) /z\text{by the basis} \]\n\n\[ = \frac{\frac{\left( G\left( S;z\right) - \mathop{\sum }\limits_{{k = 0}}^{{p - 1}}S\left( k\right) {z}^{k}\right) }{{z}^{p}} - S\left( p\right) }{z} \]\n\nby the induction hypothesis. Now write \( S\left( p\right) \) in the last expression above as \( \left( {S\left( p\right) {z}^{p}}\right) /{z}^{p} \) so that it fits into the finite summation:\n\n\[ G\left( {S \uparrow \left( {p + 1}\right) ;z}\right) = \left( \frac{G\left( {S;z}\right) - \mathop{\sum }\limits_{{k = 0}}^{p}S\left( k\right) {z}^{k}}{{z}^{p}}\right) /z \]\n\n\[ = \left( {G\left( {S;z}\right) - \mathop{\sum }\limits_{{k = 0}}^{p}S\left( k\right) {z}^{k}}\right) /{z}^{p + 1} \]\n\nTherefore the statement is true for \( p + 1 \).
|
No
|
Solve \( S\left( k\right) + {3S}\left( {k - 1}\right) - \) \( {4S}\left( {k - 2}\right) = 0, k \geq 2 \), with \( S\left( 0\right) = 3 \) and \( S\left( 1\right) = - 2 \).
|
(1) Translate to an equation about generating functions. First, we change the index of the recurrence relation by substituting \( n + 2 \) for \( k \) . The result is \( S\left( {n + 2}\right) + {3S}\left( {n + 1}\right) - {4S}\left( n\right) = 0, n \geq 0 \) . Now, if \( V\left( n\right) = \) \( S\left( {n + 2}\right) + {3S}\left( {n + 1}\right) - {4S}\left( n\right) \), then \( V \) is the zero sequence, which has a zero generating function. Furthermore, \( V = S \uparrow 2 + 3\left( {S \uparrow }\right) - {4S} \) . Therefore,\n\n\[ 0 = G\left( {V;z}\right) \]\n\n\[ = G\left( {S \uparrow 2;z}\right) + {3G}\left( {S \uparrow ;z}\right) - {4G}\left( {S;z}\right) \]\n\n\[ = \frac{G\left( {S;z}\right) - S\left( 0\right) - S\left( 1\right) z}{{z}^{2}} + 3\frac{\left( G\left( S;z\right) - S\left( 0\right) \right) }{z} - {4G}\left( {S;z}\right) \]\n\n(2) We want to now solve the following equation for \( G\left( {S;z}\right) \) :\n\n\[ \frac{G\left( {S;z}\right) - S\left( 0\right) - S\left( 1\right) z}{{z}^{2}} + 3\frac{\left( G\left( S;z\right) - S\left( 0\right) \right) }{z} - {4G}\left( {S;z}\right) = 0 \]\n\nMultiply by \( {z}^{2} \) :\n\n\[ G\left( {S;z}\right) - 3 + {2z} + {3z}\left( {G\left( {S;z}\right) - 3}\right) - 4{z}^{2}G\left( {S;z}\right) = 0 \]\n\nExpand and collect all terms involving \( G\left( {S;z}\right) \) on one side of the equation:\n\n\[ G\left( {S;z}\right) + {3zG}\left( {S;z}\right) - 4{z}^{2}G\left( {S;z}\right) = 3 + {7z} \]\n\n\[ \left( {1 + {3z} - 4{z}^{2}}\right) G\left( {S;z}\right) = 3 + {7z} \]\n\nTherefore,\n\n\[ G\left( {S;z}\right) = \frac{3 + {7z}}{1 + {3z} - 4{z}^{2}} \]\n\n(3) Determine \( S \) from its generating function. 
\( 1 + {3z} - 4{z}^{2} = \left( {1 + {4z}}\right) \left( {1 - z}\right) \), thus a partial fraction decomposition of \( G\left( {S;z}\right) \) would be:\n\n\[ \frac{A}{1 + {4z}} + \frac{B}{1 - z} = \frac{A\left( {1 - z}\right) + B\left( {1 + {4z}}\right) }{\left( {1 + {4z}}\right) \left( {1 - z}\right) } = \frac{\left( {A + B}\right) + \left( {{4B} - A}\right) z}{\left( {1 + {4z}}\right) \left( {1 - z}\right) } \]\n\nTherefore, \( A + B = 3 \) and \( {4B} - A = 7 \) . The solution of this set of\nequations is \( A = 1 \) and \( B = 2 \), so \( G\left( {S;z}\right) = \frac{1}{1 + {4z}} + \frac{2}{1 - z} \) .\n\n\( \frac{1}{1 + {4z}} \) is the generating function of \( {S}_{1}\left( n\right) = {\left( -4\right) }^{n} \), and\n\n\( \frac{2}{1 - z} \) is the generating function of \( {S}_{2}\left( n\right) = 2{\left( 1\right) }^{n} = 2 \)\n\nIn conclusion, since \( G\left( {S;z}\right) = G\left( {{S}_{1};z}\right) + G\left( {{S}_{2};z}\right), S\left( n\right) = 2 + {\left( -4\right) }^{n} \) .
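A final sanity check of the answer against the original recurrence and initial conditions (our addition):

```python
def S(n):
    # the claimed solution S(n) = 2 + (-4)^n
    return 2 + (-4) ** n

initial_ok = (S(0), S(1)) == (3, -2)
recurrence_ok = all(S(k) + 3 * S(k - 1) - 4 * S(k - 2) == 0 for k in range(2, 20))
```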
|
Yes
|
Suppose that you roll a die two times and add up the numbers on the top face for each roll. Since the faces on the die represent the integers 1 through 6, the sum must be between 2 and 12. How many ways can any one of these sums be obtained?
|
Obviously, 2 can be obtained only one way, with two 1's. There are two sequences that yield a sum of 3: 1-2 and 2-1. To obtain all of the frequencies with which the numbers 2 through 12 can be obtained, we set up the situation as follows. For \( j = 1,2;{P}_{j} \) is the rolling of the die for the \( {j}^{\text{th }} \) time. \( {X}_{j} = \{ 1,2,\ldots ,6\} \) and \( {Q}_{j} : {X}_{j} \rightarrow \{ 0,1,2,3,\ldots \} \) is defined by \( {Q}_{j}\left( x\right) = x \) . Since each number appears on a die exactly once, the frequency function is \( {F}_{j}\left( k\right) = 1 \) if \( 1 \leq k \leq 6 \), and \( {F}_{j}\left( k\right) = 0 \) otherwise. The process of rolling the die two times is quantified by adding up the \( {Q}_{j}{}^{\prime }s \) ; that is, \( Q\left( {{a}_{1},{a}_{2}}\right) = {Q}_{1}\left( {a}_{1}\right) + {Q}_{2}\left( {a}_{2}\right) \) . The generating function for the frequency function of rolling the die two times is then\n\n\[ G\left( {F;z}\right) = G\left( {{F}_{1};z}\right) G\left( {{F}_{2};z}\right) \]\n\n\[ = {\left( {z}^{6} + {z}^{5} + {z}^{4} + {z}^{3} + {z}^{2} + z\right) }^{2} \]\n\n\[ = {z}^{12} + 2{z}^{11} + 3{z}^{10} + 4{z}^{9} + 5{z}^{8} + 6{z}^{7} + 5{z}^{6} + 4{z}^{5} + 3{z}^{4} + 2{z}^{3} + {z}^{2} \]\n\nNow, to get \( F\left( k\right) \), just read the coefficient of \( {z}^{k} \) . For example, the coefficient of \( {z}^{5} \) is 4, so there are four ways to roll a total of 5 .
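The coefficients of the squared polynomial can be reproduced by brute-force enumeration of the 36 ordered pairs of rolls (a sketch, our addition):

```python
from itertools import product

# Count, for each possible sum, the ordered pairs of rolls that produce it.
freq = {}
for a, b in product(range(1, 7), repeat=2):
    freq[a + b] = freq.get(a + b, 0) + 1
```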
|
Yes
|
Example 8.5.15 Distribution of a Committee. Suppose that an organization is divided into three geographic sections, A, B, and C. Suppose that an executive committee of 11 members must be selected so that no more than 5 members from any one section are on the committee and that Sections A, B, and C must have minimums of 3,2, and 2 members, respectively, on the committee. Looking only at the number of members from each section on the committee, how many ways can the committee be made up? One example of a valid committee would be 4A’s, 4B’s, and 3C’s.
|
Let \( {P}_{A} \) be the action of deciding how many members (not who) from Section A will serve on the committee. \( {X}_{A} = \{ 3,4,5\} \) and \( {Q}_{A}\left( k\right) = k \) . The frequency function, \( {F}_{A} \), is defined by \( {F}_{A}\left( k\right) = 1 \) if \( k \in {X}_{A} \), with \( {F}_{A}\left( k\right) = 0 \) otherwise. \( G\left( {{F}_{A};z}\right) \) is then \( {z}^{3} + {z}^{4} + {z}^{5} \) . Similarly, \( G\left( {{F}_{B};z}\right) = {z}^{2} + {z}^{3} + {z}^{4} + {z}^{5} = \) \( G\left( {{F}_{C};z}\right) \) . Since the committee must have 11 members, our answer will be the coefficient of \( {z}^{11} \) in \( G\left( {{F}_{A};z}\right) G\left( {{F}_{B};z}\right) G\left( {{F}_{C};z}\right) \), which is 10 .
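The coefficient of \( {z}^{11} \) can be confirmed by directly enumerating the admissible section counts (our addition):

```python
from itertools import product

# a, b, c = number of committee members from Sections A, B, C.
committee_count = sum(
    1
    for a, b, c in product((3, 4, 5), (2, 3, 4, 5), (2, 3, 4, 5))
    if a + b + c == 11
)
```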
|
Yes
|
Example 9.1.9 An Undirected Graph. A network of computers can be described easily using a graph. Figure 9.1.10 describes a network of five computers, \( a, b, c, d \), and \( e \) . An edge between any two vertices indicates that direct two-way communication is possible between the two computers. Note that the edges of this graph are not directed. This is due to the fact that the relation that is being displayed is symmetric (i.e., if \( X \) can communicate with \( Y \), then \( Y \) can communicate with \( X \) ). Although directed edges could be used here, it would simply clutter the graph.
|
This undirected graph, in set terms, is \( V = \{ a, b, c, d, e\} \) and \( E = \{ \{ a, b\} ,\{ a, d\} ,\{ b, c\} ,\{ b, d\} ,\{ c, e\} ,\{ b, e\} \} \) .
|
Yes
|
Suppose that a path between two vertices has an edge list \( \left( {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right) \) . A subpath of this path is any portion of the path described by one or more consecutive edges in the edge list. For example, \( \left( {3,\mathrm{{No}},4}\right) \) is a subpath of \( \left( {1,2,3,\mathrm{{No}},4,3,\mathrm{{Yes}}}\right) \) . Any path is its own subpath; however, we call it an improper subpath of itself. All other nonempty subpaths are called proper subpaths.
|
A path or circuit is simple if it contains no proper subpath that is a circuit. This is the same as saying that a path or circuit is simple if it does not visit any vertex more than once except for the common initial and terminal vertex in the circuit. In the problem-solving method described in Figure 9.1.14, the path that you take is simple only if you reach a solution on the first try.
|
No
|
If you ignore the duplicate names of vertices in the four graphs of Figure 9.1.18, and consider the whole figure as one large graph, then there are four connected components in that graph.
|
It's as simple as that! It's harder to describe precisely than to understand the concept.
|
No
|
A Very Small Example. We consider the representation of the following graph:
|
The adjacency matrix that represents the graph would be\n\n\[ \nG = \left( \begin{array}{llll} 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right) \]\n\nThe same graph could be represented with the edge dictionary\n\n\[ \n\{ 1 : \left\lbrack {2,4}\right\rbrack ,2 : \left\lbrack {3,4}\right\rbrack ,3 : \left\lbrack 3\right\rbrack ,4 : \left\lbrack 1\right\rbrack \} \text{.} \]\n\nFinally, a list of edges \( \left\lbrack {\left( {1,2}\right) ,\left( {1,4}\right) ,\left( {2,3}\right) ,\left( {2,4}\right) ,\left( {3,3}\right) ,\left( {4,1}\right) }\right\rbrack \) also describes the same graph.
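The three representations carry the same information, and each can be built from the edge list (a sketch, our addition):

```python
edges = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 3), (4, 1)]

# Build the adjacency matrix from the edge list (vertices are 1..4).
n = 4
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i - 1][j - 1] = 1

# Build the edge dictionary from the same list.
edge_dict = {}
for i, j in edges:
    edge_dict.setdefault(i, []).append(j)
```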
|
Yes
|
Theorem 9.3.2 Maximal Path Theorem. If a graph has \( n \) vertices and vertex \( u \) is connected to vertex \( w \), then there exists a path from \( u \) to \( w \) of length no more than \( n \) .
|
Proof. (Indirect): Suppose \( u \) is connected to \( w \), but the shortest path from \( u \) to \( w \) has length \( m \), where \( m > n \) . A vertex list for a path of length \( m \) will have \( m + 1 \) vertices. This path can be represented as \( \left( {{v}_{0},{v}_{1},\ldots ,{v}_{m}}\right) \), where \( {v}_{0} = u \) and \( {v}_{m} = w \) . Note that since there are only \( n \) vertices in the graph and \( m \) vertices are listed in the path after \( {v}_{0} \), we can apply the pigeonhole principle and be assured that there must be some duplication in the last \( m \) vertices of the vertex list, which represents a circuit in the path. This means that our path of minimum length can be reduced, which is a contradiction.
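The path-shortening step in this proof can be made concrete. The following sketch (our own helper function, not part of the text) deletes the circuit between the two occurrences of a repeated vertex in a vertex list:

```python
def remove_circuits(vertices):
    """Return the vertex list with every circuit (repeated vertex) cut out."""
    out, position = [], {}
    for v in vertices:
        if v in position:
            # v was seen before: drop the circuit between its two occurrences
            while len(out) > position[v] + 1:
                position.pop(out.pop())
        else:
            position[v] = len(out)
            out.append(v)
    return out

# A walk from u to w that revisits vertex a:
print(remove_circuits(["u", "a", "b", "a", "w"]))  # ['u', 'a', 'w']
```

Repeated application shortens any path with a duplicated vertex, which is exactly why a shortest path can never contain one.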
|
Yes
|
Example 9.3.10 A simple example. Consider the graph below. The existence of a path from vertex 2 to vertex 3 is not difficult to determine by examination. After a few seconds, you should be able to find two paths of length four. Algorithm 9.3.8 will produce one of them.
|
Suppose that the edges from each vertex are sorted in ascending order by terminal vertex. For example, the edges from vertex 3 would be in the order \( \left( {3,1}\right) ,\left( {3,4}\right) ,\left( {3,5}\right) \) . In addition, assume that in the body of Step 4 of the algorithm, the elements of \( {D}_{r} \) are used in ascending order. Then at the end of Step 4, the value of \( V \) will be

[table of the entries of \( V \) omitted]

Therefore, the path \( \left( {2,1,4,6,3}\right) \) is produced by the algorithm. Note that if we wanted a path from 2 to 5, the information in \( V \) produces the path \( \left( {2,1,5}\right) \) since \( {V}_{5} \) .from \( = 1 \) and \( {V}_{1} \) .from \( = 2 \) . A shortest circuit that initiates at vertex 2 is also available by noting that \( {V}_{2} \) .from \( = 4,{V}_{4} \) .from \( = 1 \), and \( {V}_{1} \) .from \( = 2 \) ; thus the circuit \( \left( {2,1,4,2}\right) \) is the output of the algorithm.
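The bookkeeping in this example — recording, for each vertex, the vertex it was reached from — is ordinary breadth-first search. The sketch below is not Algorithm 9.3.8 itself, and the edge dictionary is an assumption reconstructed from the edges mentioned in the example; the figure's full edge set may contain more edges:

```python
from collections import deque

# Edges consistent with those mentioned in the example (an assumption;
# the figure's complete edge set is not reproduced here).
edges = {1: [4, 5], 2: [1], 3: [1, 4, 5], 4: [2, 6], 5: [], 6: [3]}

def bfs_path(start, goal):
    """Breadth-first search recording, for each vertex, the vertex it was
    reached from; neighbors are scanned in ascending order."""
    came_from = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in sorted(edges.get(v, [])):
            if w not in came_from:
                came_from[w] = v
                queue.append(w)
    path, v = [], goal
    while v is not None:          # walk the 'from' pointers backwards
        path.append(v)
        v = came_from[v]
    return path[::-1]

print(bfs_path(2, 3))  # [2, 1, 4, 6, 3]
print(bfs_path(2, 5))  # [2, 1, 5]
```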
|
Yes
|
If we compute all distances between vertices, we can summarize the results in a distance matrix, where the entry in row \( i \), column \( j \) is the distance from vertex \( i \) to vertex \( j \) . For the graph in Example 9.3.12, that matrix is\n\n\[ \left( \begin{matrix} 0 & 2 & 2 & 2 & 3 & 1 & 1 & 3 & 3 & 1 & 2 & 2 \\ 2 & 0 & 3 & 3 & 2 & 2 & 1 & 4 & 1 & 2 & 2 & 1 \\ 2 & 3 & 0 & 2 & 5 & 3 & 2 & 3 & 4 & 1 & 4 & 3 \\ 2 & 3 & 2 & 0 & 3 & 1 & 2 & 1 & 3 & 1 & 2 & 3 \\ 3 & 2 & 5 & 3 & 0 & 2 & 3 & 4 & 1 & 4 & 1 & 3 \\ 1 & 2 & 3 & 1 & 2 & 0 & 1 & 2 & 2 & 2 & 1 & 2 \\ 1 & 1 & 2 & 2 & 3 & 1 & 0 & 3 & 2 & 1 & 2 & 1 \\ 3 & 4 & 3 & 1 & 4 & 2 & 3 & 0 & 4 & 2 & 3 & 4 \\ 3 & 1 & 4 & 3 & 1 & 2 & 2 & 4 & 0 & 3 & 1 & 2 \\ 1 & 2 & 1 & 1 & 4 & 2 & 1 & 2 & 3 & 0 & 3 & 2 \\ 2 & 2 & 4 & 2 & 1 & 1 & 2 & 3 & 1 & 3 & 0 & 3 \\ 2 & 1 & 3 & 3 & 3 & 2 & 1 & 4 & 2 & 2 & 3 & 0 \end{matrix}\right) \]
|
If we scan the matrix, we can see that the maximum distance is the distance between vertices 3 and 5, which is 5; this is the diameter of the graph. If we focus on individual rows and identify the maximum values, which are the eccentricities, their minimum is 3, which is the graph's radius. This eccentricity value is attained by vertices in the set \( \{ 1,4,6,7\} \), which is the center of the graph.
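These quantities can be computed mechanically from the distance matrix; a short Python sketch:

```python
# The distance matrix above, row by row.
D = [
    [0, 2, 2, 2, 3, 1, 1, 3, 3, 1, 2, 2],
    [2, 0, 3, 3, 2, 2, 1, 4, 1, 2, 2, 1],
    [2, 3, 0, 2, 5, 3, 2, 3, 4, 1, 4, 3],
    [2, 3, 2, 0, 3, 1, 2, 1, 3, 1, 2, 3],
    [3, 2, 5, 3, 0, 2, 3, 4, 1, 4, 1, 3],
    [1, 2, 3, 1, 2, 0, 1, 2, 2, 2, 1, 2],
    [1, 1, 2, 2, 3, 1, 0, 3, 2, 1, 2, 1],
    [3, 4, 3, 1, 4, 2, 3, 0, 4, 2, 3, 4],
    [3, 1, 4, 3, 1, 2, 2, 4, 0, 3, 1, 2],
    [1, 2, 1, 1, 4, 2, 1, 2, 3, 0, 3, 2],
    [2, 2, 4, 2, 1, 1, 2, 3, 1, 3, 0, 3],
    [2, 1, 3, 3, 3, 2, 1, 4, 2, 2, 3, 0],
]

eccentricity = [max(row) for row in D]   # worst-case distance from each vertex
diameter = max(eccentricity)             # largest distance in the graph
radius = min(eccentricity)               # smallest eccentricity
center = {i + 1 for i, e in enumerate(eccentricity) if e == radius}

print(diameter, radius, sorted(center))  # 5 3 [1, 4, 6, 7]
```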
|
Yes
|
Theorem 9.4.3 Euler's Theorem: Koenigsberg Case. No walking tour of Koenigsberg can be designed so that each bridge is used exactly once.
|
Proof. The map of Koenigsberg can be represented as an undirected multigraph, as in Figure 9.4.2. The four land masses are the vertices and each edge represents a bridge.\n\nThe desired tour is then a path that uses each edge once and only once. Since the path can start and end at two different vertices, there are two remaining vertices that must be intermediate vertices in the path. If \( x \) is an intermediate vertex, then every time that you visit \( x \), you must use two of its incident edges, one to enter and one to exit. Therefore, there must be an even number of edges connecting \( x \) to the other vertices. Since every vertex in the Koenigsberg graph has an odd number of edges, no tour of the type that is desired is possible.
|
Yes
|
A common problem encountered in engineering is that of analog-to-digital (a-d) conversion, where the reading on a dial, for example, must be converted to a numerical value. In order for this conversion to be done reliably and quickly, one must solve an interesting problem in graph theory. Suppose a dial can be turned in any direction, and that the positions will be converted to one of the numbers zero through seven as depicted in Figure 9.4.19. The angles from 0 to 360 are divided into eight equal parts, and each part is assigned a number starting with 0 and increasing clockwise. If the dial points in any of these sectors, the conversion is to the number of that sector. If the dial is on the boundary, then we will be satisfied with the conversion to either of the numbers in the bordering sectors. This conversion can be thought of as giving an approximate angle of the dial, for if the dial is in sector \( k \), then the angle that the dial makes with east is approximately \( {45}{k}^{ \circ } \) .
|
All digital computers represent numbers in binary form, as a sequence of 0 's and 1's called bits, short for binary digits. The binary representations of numbers 0 through 7 are:\n\n\[ 0 = {000}_{\text{two }} = 0 \cdot 4 + 0 \cdot 2 + 0 \cdot 1 \]\n\n\[ 1 = {001}_{\text{two }} = 0 \cdot 4 + 0 \cdot 2 + 1 \cdot 1 \]\n\n\[ 2 = {010}_{\text{two }} = 0 \cdot 4 + 1 \cdot 2 + 0 \cdot 1 \]\n\n\[ 3 = {011}_{\text{two }} = 0 \cdot 4 + 1 \cdot 2 + 1 \cdot 1 \]\n\n\[ 4 = {100}_{\text{two }} = 1 \cdot 4 + 0 \cdot 2 + 0 \cdot 1 \]\n\n\[ 5 = {101}_{\text{two }} = 1 \cdot 4 + 0 \cdot 2 + 1 \cdot 1 \]\n\n\[ 6 = {110}_{\text{two }} = 1 \cdot 4 + 1 \cdot 2 + 0 \cdot 1 \]\n\n\[ 7 = {111}_{\text{two }} = 1 \cdot 4 + 1 \cdot 2 + 1 \cdot 1 \]\n\nThe way that we could send those bits to a computer is by coating parts of the back of the dial with a metallic substance, as in Figure 9.4.20. For each of the three concentric circles on the dial there is a small magnet. If a magnet lies under a part of the dial that has been coated with metal, then it will turn a switch ON, whereas the switch stays OFF when no metal is detected above a magnet. Notice how every ON/OFF combination of the three switches is possible given the way the back of the dial is coated.\n\nIf the dial is placed so that the magnets are in the middle of a sector, we expect this method to work well. There is a problem on certain boundaries, however. If the dial is turned so that the magnets are between sectors three and four, for example, then it is unclear what the result will be. This is due to the fact that each magnet will have only a fraction of the required metal above it to turn its switch ON. Due to expected irregularities in the coating of the dial, we can be safe in saying that for each switch either ON or OFF could be the result, and so if the dial is between sectors three and four, any number could be indicated. This problem does not occur between every sector. 
For example, between sectors 0 and 1 , there is only one switch that cannot be predicted. No matter what the outcome is for the units switch in this case, the indicated sector must be either 0 or 1 . This is consistent with the original objective that a positioning of the dial on a boundary of two sectors should produce the number of either sector.
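Which boundaries are troublesome is exactly a question of how many bits change between adjacent sector numbers. A quick check in Python (standard binary encoding, as listed above):

```python
def bits_changed(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# Compare each sector with the next one clockwise (wrapping 7 back to 0).
for k in range(8):
    print(k, (k + 1) % 8, bits_changed(k, (k + 1) % 8))
```

Only one bit changes at the 0/1 boundary, but all three bits change between sectors 3 and 4 and between 7 and 0, which is precisely where the conversion is unreliable.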
|
Yes
|
The problem of a Boston salesman. The Traveling Salesman Problem gets its name from the situation of a salesman who wants to minimize the number of miles that he travels in visiting his customers. For example, if a salesman from Boston must visit the other capital cities of New England, then the problem is to find a circuit in the weighted graph of Example 9.5.2. Note that distance and cost are clearly related in this case. In addition, tolls and traffic congestion might also be taken into account.
|
The search for an efficient algorithm that solves the Traveling Salesman Problem has occupied researchers for years. If the graph in question is complete, there are \( \left( {n - 1}\right) \) ! different circuits. As \( n \) gets large, it is infeasible to check every possible circuit. The most efficient algorithms for solving the Traveling Salesman Problem take an amount of time that is proportional to \( n{2}^{n} \) . Since this quantity grows so quickly, we can't expect to have the time to solve the Traveling Salesman Problem for large values of \( n \) . Most of the useful algorithms that have been developed have to be heuristic; that is, they find a circuit that should be close to the optimal one. One such algorithm is the closest neighbor algorithm.
|
No
|
Example 9.5.8 The One-way Street. A salesman must make stops at vertices A, B, and C, which are all on the same one-way street. The graph in Figure 9.5.9 is weighted by the function \( w\left( {i, j}\right) \) equal to the time it takes to drive from vertex \( i \) to vertex \( j \) .
|
Note that if \( j \) is down the one-way street from \( i \), then \( w\left( {i, j}\right) < w\left( {j, i}\right) \) . The values of \( {C}_{opt} \) and \( {C}_{cn} \) are 20 and 32, respectively. Verify that \( {C}_{cn} \) is 32 by using the closest neighbor algorithm. The value of \( \frac{{C}_{cn}}{{C}_{opt}} = {1.6} \) is significant in this case since our salesman would spend 60 percent more time on the road if he used the closest neighbor algorithm.
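The closest neighbor algorithm itself is only a few lines. The sketch below uses a hypothetical asymmetric weight matrix of our own, not the graph of Figure 9.5.9:

```python
def closest_neighbor_circuit(w, start=0):
    """Greedy circuit: from the current vertex, always travel to the
    nearest unvisited vertex, then return to the start."""
    n = len(w)
    circuit, current = [start], start
    unvisited = set(range(n)) - {start}
    while unvisited:
        current = min(unvisited, key=lambda v: w[current][v])
        unvisited.remove(current)
        circuit.append(current)
    circuit.append(start)
    cost = sum(w[a][b] for a, b in zip(circuit, circuit[1:]))
    return circuit, cost

# Hypothetical asymmetric travel times (w[i][j] need not equal w[j][i]).
w = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
print(closest_neighbor_circuit(w))  # ([0, 1, 3, 2, 0], 33)
```

As the example in the text shows, the greedy circuit need not be optimal; the heuristic only trades quality for speed.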
|
No
|
Example 9.5.12 The Unit Square Problem. Suppose a robot is programmed to weld joints on square metal plates. Each plate must be welded at prescribed points on the square. To minimize the time it takes to complete the job, the total distance that a robot's arm moves should be minimized. Let \( d\left( {P, Q}\right) \) be the distance between \( P \) and \( Q \) . Assume that before each plate can be welded, the arm must be positioned at a certain point \( {P}_{0} \) . Given a list of \( n \) points, we want to put them in order so that\n\n\[ d\left( {{P}_{0},{P}_{1}}\right) + d\left( {{P}_{1},{P}_{2}}\right) + \cdots + d\left( {{P}_{n - 1},{P}_{n}}\right) + d\left( {{P}_{n},{P}_{0}}\right) \]\n\nis as small as possible.
|
Heuristic 9.5.13 The Strip Algorithm. Given \( n \) points in the unit square: Phase 1:\n\n(1) Divide the square into \( \left\lceil \sqrt{n/2}\right\rceil \) vertical strips, as in Figure 9.5.14. Let \( d \) be the width of each strip. If a point lies on a boundary between two strips, consider it part of the left-hand strip.\n\n(2) Starting from the left, find the first strip that contains one of the points. Locate the starting point by selecting the first point that is encountered in that strip as you travel from bottom to top. We will assume that the first point is \( \left( {{x}_{1},{y}_{1}}\right) \) .\n\n(3) Alternate traveling up and down the strips that contain vertices until all of the vertices have been reached.\n\n(4) Return to the starting point.\n\nPhase 2:\n\n(1) Shift all strips \( d/2 \) units to the right (creating a small strip on the left).\n\n(2) Repeat Steps 1.2 through 1.4 of Phase 1 with the new strips.\n\nWhen the two phases are complete, choose the shorter of the two circuits obtained.
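Phase 1 of the strip algorithm can be sketched as follows; the handling of ties within a strip is our own simplification, and the point set is hypothetical:

```python
import math

def strip_tour(points):
    """Phase 1 of the strip algorithm: order the points by sweeping the
    vertical strips alternately bottom-to-top and top-to-bottom."""
    n = len(points)
    k = math.ceil(math.sqrt(n / 2))   # number of strips
    d = 1.0 / k                       # width of each strip
    strips = [[] for _ in range(k)]
    for (x, y) in points:
        i = min(int(x / d), k - 1)
        if i > 0 and x == i * d:      # boundary point joins the left strip
            i -= 1
        strips[i].append((x, y))
    tour, upward = [], True
    for strip in strips:
        if not strip:
            continue
        strip.sort(key=lambda p: p[1], reverse=not upward)
        tour.extend(strip)
        upward = not upward
    return tour + tour[:1]            # return to the starting point

pts = [(0.1, 0.8), (0.1, 0.2), (0.6, 0.5), (0.9, 0.9), (0.6, 0.1)]
print(strip_tour(pts))
```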
|
Yes
|
Theorem 9.5.20 Flow out of Source equals Flow in Sink. If \( f \) is a flow, then \( \;\mathop{\sum }\limits_{{\left( {\text{source }, v}\right) \in E}}f\left( {\text{source }, v}\right) = \mathop{\sum }\limits_{{\left( {v,\text{ sink }}\right) \in E}}f\left( {v,\text{ sink }}\right) \)
|
Proof. Subtract the right-hand side of (9.5.1) from the left-hand side. The result is:\n\n\[ \text{Flow into}v - \text{Flow out of}v = 0 \]\n\nNow sum up these differences for each vertex in \( {V}^{\prime } = V - \{ \) source, sink \( \} \) . The result is\n\n\[ \mathop{\sum }\limits_{{v \in {V}^{\prime }}}\left( {\mathop{\sum }\limits_{{\left( {x, v}\right) \in E}}f\left( {x, v}\right) - \mathop{\sum }\limits_{{\left( {v, y}\right) \in E}}f\left( {v, y}\right) }\right) = 0 \]\n\n\( \left( {9.5.2}\right) \)\n\nNow observe that if an edge connects two vertices in \( {V}^{\prime } \), its flow appears as both a positive and a negative term in (9.5.2). This means that the only positive terms that are not cancelled out are the flows into the sink. In addition, the only negative terms that remain are the flows out of the source. Therefore,\n\n\[ \mathop{\sum }\limits_{{\left( {v,\operatorname{sink}}\right) \in E}}f\left( {v,\operatorname{sink}}\right) - \mathop{\sum }\limits_{{\left( {\text{source }, v}\right) \in E}}f\left( {\text{source }, v}\right) = 0 \]
|
Yes
|
Example 9.5.23 Augmenting City Water Flow. For \( {f}_{1} \) in Figure 9.5.18, a flow augmenting path would be \( \left( {{e}_{2},{e}_{3},{e}_{4}}\right) \) since \( w\left( {e}_{2}\right) - {f}_{1}\left( {e}_{2}\right) = {15}, w\left( {e}_{3}\right) - \) \( {f}_{1}\left( {e}_{3}\right) = 5 \), and \( w\left( {e}_{4}\right) - {f}_{1}\left( {e}_{4}\right) = 5 \) .
|
These positive differences represent unused capacities, and the smallest value represents the amount of flow that can be added to each edge in the path. Note that by adding 5 to each edge in our path, we obtain \( {f}_{2} \), which is maximal. If an edge with a positive flow is used in its reverse direction, it is contributing a movement of material that is counterproductive to the objective of maximizing flow. This is why the algorithm directs us to decrease the flow through that edge.
|
Yes
|
Example 9.5.27 A flow augmenting path going against the flow. Consider the network in Figure 9.5.28, where the current flow, \( f \), is indicated by a labeling of the edges.
|
The path (Source, \( {v}_{2},{v}_{1},{v}_{3},\operatorname{Sink} \) ) is a flow augmenting path that allows us to increase the flow by one unit. Note that \( \left( {{v}_{1},{v}_{3}}\right) \) is used in the reverse direction, which is allowed because \( f\left( {{v}_{1},{v}_{3}}\right) > 0 \) . The value of the new flow that we obtain is 8 . This flow must be maximal since the capacities out of the source add up to 8 . This maximal flow is defined by Figure 9.5.29.
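The augmenting process in these examples can be coded generically: search for a path with positive residual capacity (counting positive flow on reverse edges, as in this example), then push the smallest slack along it. The network below is our own small example, not the one in Figure 9.5.28:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Repeatedly find a flow augmenting path (breadth-first search on
    residual capacities) and push the smallest slack along it."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    while True:
        # residual capacity: unused forward capacity plus reversible flow
        residual = lambda u, v: capacity[u][v] - flow[u][v] + flow[v][u]
        came_from = {source: None}
        queue = deque([source])
        while queue and sink not in came_from:
            u = queue.popleft()
            for v in range(n):
                if v not in came_from and residual(u, v) > 0:
                    came_from[v] = u
                    queue.append(v)
        if sink not in came_from:     # no augmenting path: flow is maximal
            return sum(flow[source]) - sum(r[source] for r in flow)
        path, v = [], sink
        while came_from[v] is not None:
            path.append((came_from[v], v))
            v = came_from[v]
        slack = min(residual(u, v) for u, v in path)
        for u, v in path:
            undo = min(flow[v][u], slack)   # cancel reverse flow first
            flow[v][u] -= undo
            flow[u][v] += slack - undo

# Hypothetical network: vertex 0 is the source, vertex 3 the sink.
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
]
print(max_flow(cap, 0, 3))  # 5
```

The capacities out of the source add up to 5, so the value returned is maximal, just as the value 8 was maximal in the example above.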
|
Yes
|
Theorem 9.6.8 Euler’s Formula. If \( G = \left( {V, E}\right) \) is a connected planar graph with \( r \) regions, \( v \) vertices, and \( e \) edges, then\n\n\[ v + r - e = 2 \]
|
Proof. We prove Euler’s Formula by Induction on \( e \), for \( e \geq 0 \). \n\nBasis: If \( e = 0 \), then \( G \) must be a graph with one vertex, \( v = 1 \) ; and there is one infinite region, \( r = 1 \) . Therefore, \( v + r - e = 1 + 1 - 0 = 2 \), and the basis is true.\n\nInduction: Suppose that \( G \) has \( k \) edges, \( k \geq 1 \), and that all connected planar graphs with fewer than \( k \) edges satisfy (9.6.1). Select any edge that is part of the boundary of the infinite region and call it \( {e}_{1} \) . Let \( {G}^{\prime } \) be the graph obtained from \( G \) by deleting \( {e}_{1} \) . Figure 9.6.9 illustrates the two different possibilities we need to consider: either \( {G}^{\prime } \) is connected or it has two connected components, \( {G}_{1} \) and \( {G}_{2} \).\n\nIf \( {G}^{\prime } \) is connected, the induction hypothesis can be applied to it. If \( {G}^{\prime } \) has \( {v}^{\prime } \) vertices, \( {r}^{\prime } \) regions and \( {e}^{\prime } \) edges, then \( {v}^{\prime } + {r}^{\prime } - {e}^{\prime } = 2 \) and in terms of the corresponding numbers for \( G \) ,\n\n\( {v}^{\prime } = v\; \) No vertices were removed to form \( {G}^{\prime } \)\n\n\( {r}^{\prime } = r - 1\; \) One region of \( G \) was merged with the infinite region when \( {e}_{1} \) was removed\n\n\( {e}^{\prime } = k - 1\; \) We assumed that \( G \) had \( k \) edges.\n\nFor the case where \( {G}^{\prime } \) is connected,\n\n\[ v + r - e = v + r - k \]\n\n\[ = {v}^{\prime } + \left( {{r}^{\prime } + 1}\right) - \left( {{e}^{\prime } + 1}\right) \]\n\n\[ = {v}^{\prime } + {r}^{\prime } - {e}^{\prime } \]\n\n\[ = 2 \]\n\nIf \( {G}^{\prime } \) is not connected, it must consist of two connected components, \( {G}_{1} \) and \( {G}_{2} \), since we started with a connected graph, \( G \) . We can apply the induction hypothesis to each of the two components to complete the proof. 
We leave it to the students to do this, with the reminder that in counting regions, \( {G}_{1} \) and \( {G}_{2} \) will share the same infinite region.
|
No
|
Theorem 9.6.10 A Bound on Edges of a Planar Graph. If \( G = \left( {V, E}\right) \) is a connected planar graph with \( v \) vertices, \( v \geq 3 \), and \( e \) edges, then\n\n\[ e \leq {3v} - 6 \]
|
Proof. (Outline of a Proof)\n\n(a) Let \( r \) be the number of regions in \( G \) . For each region, count the number of edges that comprise its border. The sum of these counts must be at least \( {3r} \) . Recall that we are working with simple graphs here, so a region made by two edges connecting the same two vertices is not possible.\n\n(b) Based on (a), infer that the number of edges in \( G \) must be at least \( \frac{3r}{2} \).\n\n(c) \( e \geq \frac{3r}{2} \Rightarrow r \leq \frac{2e}{3} \)\n\n(d) Substitute \( \frac{2e}{3} \) for \( r \) in Euler’s Formula to obtain an inequality that is equivalent to (9.6.2)
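Carrying out step (d) explicitly: substituting \( r \leq \frac{2e}{3} \) into Euler's Formula gives

\[ 2 = v + r - e \leq v + \frac{2e}{3} - e = v - \frac{e}{3}, \]

so \( \frac{e}{3} \leq v - 2 \), which is equivalent to \( e \leq {3v} - 6 \) .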
|
No
|
Theorem 9.6.12 A Vertex of Degree Five. If \( G \) is a connected planar graph, then it has a vertex with degree 5 or less.
|
Proof. (by contradiction): We can assume that \( G \) has at least seven vertices, for otherwise the degree of any vertex is at most 5 . Suppose that \( G \) is a connected planar graph and each vertex has a degree of 6 or more. Then, since each edge contributes to the degree of two vertices, \( e \geq \frac{6v}{2} = {3v} \) . However, Theorem 9.6.10 states that \( e \leq {3v} - 6 < {3v} \), which is a contradiction.
|
Yes
|
Theorem 9.6.16 The Five-Color Theorem. If \( G \) is a planar graph, then \( \chi \left( G\right) \leq 5 \) .
|
Proof. The number 5 is not a sharp upper bound for \( \chi \left( G\right) \) because of the Four-Color Theorem.\n\nThis is a proof by Induction on the Number of Vertices in the Graph.\n\nBasis: Clearly, a graph with one vertex has a chromatic number of 1 .\n\nInduction: Assume that all planar graphs with \( n - 1 \) vertices have a chromatic number of 5 or less. Let \( G \) be a planar graph with \( n \) vertices. By Theorem 9.6.12, there exists a vertex \( v \) with \( \deg v \leq 5 \) . Let \( G - v \) be the planar graph obtained by deleting \( v \) and all edges that connect \( v \) to other vertices in \( G \) . By the induction hypothesis, \( G - v \) has a 5-coloring. Assume that the colors used are red, white, blue, green, and yellow.\n\nIf \( \deg v < 5 \), then we can produce a 5-coloring of \( G \) by selecting a color that is not used in coloring the vertices that are connected to \( v \) with an edge in \( G \) .\n\nIf \( \deg v = 5 \), then we can use the same approach if the five vertices that are adjacent to \( v \) are not all colored differently. We are now left with the possibility that \( {v}_{1},{v}_{2},{v}_{3},{v}_{4} \), and \( {v}_{5} \) are all connected to \( v \) by an edge and they are all colored differently. Assume that they are colored red, white, blue, yellow, and green, respectively, as in Figure 9.6.17.\n\nStarting at \( {v}_{1} \) in \( G - v \), suppose we try to construct a path to \( {v}_{3} \) that passes through only red and blue vertices. This can either be accomplished or it can't be accomplished. If it can’t be done, consider all paths that start at \( {v}_{1} \) and go through only red and blue vertices. If we exchange the colors of the vertices in these paths, including \( {v}_{1} \), we still have a 5-coloring of \( G - v \) . Since \( {v}_{1} \) is now blue, we can color the central vertex, \( v \), red.\n\nFinally, suppose that \( {v}_{1} \) is connected to \( {v}_{3} \) using only red and blue vertices. 
Then a path from \( {v}_{1} \) to \( {v}_{3} \) using only red and blue vertices, followed by the edges \( \left( {{v}_{3}, v}\right) \) and \( \left( {v,{v}_{1}}\right) \), completes a circuit that either encloses \( {v}_{2} \) or encloses \( {v}_{4} \) and \( {v}_{5} \) . Therefore, no path from \( {v}_{2} \) to \( {v}_{4} \) exists using only white and yellow vertices. We can then repeat the same process as in the previous paragraph with \( {v}_{2} \) and \( {v}_{4} \), which will allow us to color \( v \) white.
|
Yes
|
Theorem 9.6.20 No Odd Circuits in a Bipartite Graph. An undirected graph is bipartite if and only if it has no circuit of odd length.
|
Proof. \( \left( \Rightarrow \right) \) Let \( G = \left( {V, E}\right) \) be a bipartite graph that is partitioned into two sets, \( \mathrm{R}\left( \mathrm{{ed}}\right) \) and \( \mathrm{B}\left( \mathrm{{lue}}\right) \) that define a 2-coloring. Consider any circuit in \( G \) . Specify a direction in the circuit and define \( f \) on the vertices of the circuit by\n\n\[ f\left( u\right) = \text{the next vertex in the circuit after}u \]\n\nNote that \( f \) is a bijection, and since adjacent vertices have different colors, \( f \) maps each red vertex to a blue vertex and each blue vertex to a red vertex. Hence the number of red vertices in the circuit equals the number of blue vertices, and so the length of the circuit must be even.\n\n\( \left( \Leftarrow \right) \) Assume that \( G \) has no circuit of odd length. For each component of \( G \), select any vertex \( w \) and color it red. Then for every other vertex \( v \) in the component, find the path of shortest distance from \( w \) to \( v \) . If the length of the path is odd, color \( v \) blue, and if it is even, color \( v \) red. We claim that this method defines a 2-coloring of \( G \) . Suppose that it does not define a 2-coloring.\n\nThen let \( {v}_{a} \) and \( {v}_{b} \) be two vertices with identical colors that are connected with an edge. By the way that we colored \( G \), neither \( {v}_{a} \) nor \( {v}_{b} \) could equal \( w \) . We can now construct a circuit with an odd length in \( G \) . First, we start at \( w \) and follow the shortest path to \( {v}_{a} \) . Then follow the edge \( \left( {{v}_{a},{v}_{b}}\right) \), and finally, follow the reverse of a shortest path from \( w \) to \( {v}_{b} \) . Since \( {v}_{a} \) and \( {v}_{b} \) have the same color, the first and third segments of this circuit have lengths that are both odd or both even, so the sum of their lengths must be even. The addition of the single edge \( \left( {{v}_{a},{v}_{b}}\right) \) shows us that this circuit has an odd length. This contradicts our premise.
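The coloring procedure in the second half of the proof is essentially breadth-first search by distance parity. A sketch (our own code, not part of the text):

```python
from collections import deque

def two_color(adj):
    """Color vertices red (0) / blue (1) by distance parity from a root,
    one component at a time; return None if some edge joins like colors."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0                 # color the root red
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None          # an odd circuit exists
    return color

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # circuit of length 4
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # circuit of length 3
print(two_color(square) is not None, two_color(triangle) is None)  # True True
```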
|
Yes
|
Lemma 10.1.10 Let \( G = \left( {V, E}\right) \) be an undirected graph with no self-loops, and let \( {v}_{a},{v}_{b} \in V \) . If two different simple paths exist between \( {v}_{a} \) and \( {v}_{b} \), then there exists a cycle in \( G \) .
|
Proof. Let \( {p}_{1} = \left( {{e}_{1},{e}_{2},\ldots ,{e}_{m}}\right) \) and \( {p}_{2} = \left( {{f}_{1},{f}_{2},\ldots ,{f}_{n}}\right) \) be two different simple paths from \( {v}_{a} \) to \( {v}_{b} \) . The first step we will take is to delete from \( {p}_{1} \) and \( {p}_{2} \) the initial edges that are identical. That is, if \( {e}_{1} = {f}_{1},{e}_{2} = {f}_{2},\ldots ,{e}_{j} = {f}_{j} \), and \( {e}_{j + 1} \neq {f}_{j + 1} \), delete the first \( j \) edges of both paths. Once this is done, both paths start at the same vertex, call it \( {v}_{c} \), and both still end at \( {v}_{b} \) . Now we construct a cycle by starting at \( {v}_{c} \) and following what is left of \( {p}_{1} \) until we first meet what is left of \( {p}_{2} \) . If this first meeting occurs at vertex \( {v}_{d} \), then the remainder of the cycle is completed by following the portion of the reverse of \( {p}_{2} \) that starts at \( {v}_{d} \) and ends at \( {v}_{c} \) .
|
Yes
|
Theorem 10.1.11 Equivalent Conditions for a Graph to be a Tree. Let \( G = \left( {V, E}\right) \) be an undirected graph with no self-loops and \( \left| V\right| = n \) . The following are all equivalent:\n\n(1) \( G \) is a tree.\n\n(2) For each pair of distinct vertices in \( V \), there exists a unique simple path between them.\n\n(3) \( G \) is connected, and if \( e \in E \), then \( \left( {V, E-\{ e\} }\right) \) is disconnected.\n\n(4) \( G \) contains no cycles, but by adding one edge, you create a cycle.\n\n(5) \( G \) is connected and \( \left| E\right| = n - 1 \) .
|
Proof. Proof Strategy. Most of this theorem can be proven by proving the following chain of implications: \( \left( 1\right) \Rightarrow \left( 2\right) ,\left( 2\right) \Rightarrow \left( 3\right) ,\left( 3\right) \Rightarrow \left( 4\right) \), and \( \left( 4\right) \Rightarrow \left( 1\right) \) . Once these implications have been demonstrated, the transitive closure of \( \Rightarrow \) on \( 1,2,3,4 \) establishes the equivalence of the first four conditions. The proof that Statement 5 is equivalent to the first four can be done by induction, which we will leave to the reader.\n\n\( \left( 1\right) \Rightarrow \left( 2\right) \) (Indirect). Assume that \( G \) is a tree and that there exists a pair of vertices between which there is either no path or there are at least two distinct simple paths. Both of these possibilities contradict the premise that \( G \) is a tree. If no path exists, \( G \) is disconnected, and if two simple paths exist, a cycle can be obtained by Lemma 10.1.10.\n\n\( \left( 2\right) \Rightarrow \left( 3\right) \) . We now use Statement 2 as a premise. Since each pair of vertices in \( V \) is connected by exactly one simple path, \( G \) is connected. Now if we select any edge \( e \) in \( E \), it connects two vertices, \( {v}_{1} \) and \( {v}_{2} \) . By (2), there is no simple path connecting \( {v}_{1} \) to \( {v}_{2} \) other than \( e \) . Therefore, no path at all can exist between \( {v}_{1} \) and \( {v}_{2} \) in \( \left( {V, E-\{ e\} }\right) \) . Hence \( \left( {V, E-\{ e\} }\right) \) is disconnected.\n\n\( \left( 3\right) \Rightarrow \left( 4\right) \) . Now we will assume that Statement 3 is true. We must show that \( G \) has no cycles and that adding an edge to \( G \) creates a cycle. We will use an indirect proof for this part. Since (4) is a conjunction, by DeMorgan's Law its negation is a disjunction and we must consider two cases. First, suppose that \( G \) has a cycle. 
Then the deletion of any edge in the cycle keeps the graph connected, which contradicts (3). The second case is that the addition of an edge to \( G \) does not create a cycle. Then there are two distinct paths between the vertices that the new edge connects. By Lemma 10.1.10, a cycle can then be created, which is a contradiction.\n\n\( \left( 4\right) \Rightarrow \left( 1\right) \) Assume that \( G \) contains no cycles and that the addition of an edge creates a cycle. All that we need to prove to verify that \( G \) is a tree is that \( G \) is connected. If it is not connected, then select any two vertices that are not connected. If we add an edge to connect them, the fact that a cycle is created implies that a second path between the two vertices can be found which is in the original graph, which is a contradiction.
|
No
|
Theorem 10.2.6 Let \( G = \left( {V, E, w}\right) \) be a weighted connected undirected graph. Let \( V \) be partitioned into two sets \( L \) and \( R \) . If \( {e}^{ * } \) is a bridge of least weight between \( L \) and \( R \), then there exists a minimal spanning tree for \( G \) that includes \( {e}^{ * } \) .
|
Proof. Suppose that no minimal spanning tree including \( {e}^{ * } \) exists. Let \( T = \left( {V,{E}^{\prime }}\right) \) be a minimal spanning tree. If we add \( {e}^{ * } \) to \( T \), a cycle is created, and this cycle must contain another bridge, \( e \), between \( L \) and \( R \) . Since \( w\left( {e}^{ * }\right) \leq w\left( e\right) \), we can delete \( e \), and the new tree, which includes \( {e}^{ * } \), must also be a minimal spanning tree, contradicting our assumption.
|
Yes
|
Example 10.2.11 A Small Example. Consider the graph in Figure 10.2.12. If we apply Prim’s Algorithm starting at \( a \), we obtain the following edge list in the order given: \( \{ a, f\} ,\{ f, e\} ,\{ e, c\} ,\{ c, d\} ,\{ f, b\} ,\{ b, g\} \) . The total of the weights of these edges is 20 .
|
The method that we have used (in Step 2.1) to select a bridge when more than one minimally weighted bridge exists is to order all such bridges alphabetically by the vertex in \( L \) and then, if further ties exist, by the vertex in \( R \) . The first bridge in that order is selected in Step 2.1 of the algorithm.
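Prim's algorithm can be sketched directly from this description: grow the tree one least-weight bridge at a time, breaking weight ties by sorting on the vertex labels. The weighted graph below is our own example, not the one in Figure 10.2.12:

```python
def prim(graph, start):
    """Grow a spanning tree from `start`, always adding a least-weight
    bridge between the tree (L) and the rest of the graph (R)."""
    tree_edges, total = [], 0
    in_tree = {start}
    while len(in_tree) < len(graph):
        # All bridges from the tree to the outside, with weights first;
        # sorting breaks weight ties alphabetically by (u, v).
        bridges = sorted(
            (w, u, v)
            for u in in_tree
            for v, w in graph[u].items()
            if v not in in_tree
        )
        w, u, v = bridges[0]
        in_tree.add(v)
        tree_edges.append((u, v))
        total += w
    return tree_edges, total

# A hypothetical weighted graph: vertex -> {neighbor: weight}.
g = {
    "a": {"b": 2, "c": 3},
    "b": {"a": 2, "c": 1, "d": 4},
    "c": {"a": 3, "b": 1, "d": 5},
    "d": {"b": 4, "c": 5, "e": 2},
    "e": {"d": 2},
}
edges, total = prim(g, "a")
print(edges, total)
```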
|
No
|
The Case for Complete Graphs. The Minimum Diameter Spanning Tree Problem is trivial to solve in a \( {K}_{n} \) . Select any vertex \( {v}_{0} \) and construct the spanning tree whose edge set is the set of edges that connect \( {v}_{0} \) to the other vertices in the \( {K}_{n} \) .
|
Figure 10.2.15 illustrates a solution for \( n = 5 \) .
|
No
|
Binary Tree Sort. Given a collection of integers (or other objects that can be ordered), one technique for sorting is a binary tree sort. If the integers are \( {a}_{1},{a}_{2},\ldots ,{a}_{n}, n \geq 1 \), we first execute the following algorithm that creates a binary tree:
|
Algorithm 10.4.8 Binary Sort Tree Creation.\n\n(1) Insert \( {a}_{1} \) into the root of the tree.\n\n(2) For \( k \mathrel{\text{:=}} 2 \) to \( n \) // insert \( {a}_{k} \) into the tree\n\n(a) \( r = {a}_{1} \)\n\n(b) inserted \( = \) false\n\n(c) while not(inserted):\n\nif \( {a}_{k} < r \) :\n\nif \( r \) has a left child:\n\n\( r = \) left child of \( r \)\n\nelse:\n\nmake \( {a}_{k} \) the left child of \( r \)\n\ninserted \( = \) true\n\nelse:\n\nif \( r \) has a right child:\n\n\( r = \) right child of \( r \)\n\nelse:\n\nmake \( {a}_{k} \) the right child of \( r \)\n\ninserted \( = \) true\n\nIf the integers to be sorted are \( {25},{17},9,{20},{33},{13} \), and 30, then the tree that is created is the one in Figure 10.4.9. The inorder traversal of this tree is \( 9,{13},{17},{20},{25},{30},{33} \), the integers in ascending order. In general, the inorder traversal of the tree that is constructed in the algorithm above will produce a sorted list. The preorder and postorder traversals of the tree have no meaning here.
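Algorithm 10.4.8 translates almost line for line into Python; the node representation below (a dictionary with `value`, `left`, and `right` keys) is our own choice:

```python
def insert(root, x):
    """Insert x into the binary sort tree rooted at `root`, following
    the left/right descent of Algorithm 10.4.8."""
    r = root
    while True:
        side = "left" if x < r["value"] else "right"
        if r[side] is None:
            r[side] = {"value": x, "left": None, "right": None}
            return
        r = r[side]

def inorder(node):
    """Left subtree, then the node itself, then the right subtree."""
    if node is None:
        return []
    return inorder(node["left"]) + [node["value"]] + inorder(node["right"])

data = [25, 17, 9, 20, 33, 13, 30]
root = {"value": data[0], "left": None, "right": None}
for x in data[1:]:
    insert(root, x)
print(inorder(root))  # [9, 13, 17, 20, 25, 30, 33]
```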
|
Yes
|
Theorem 11.2.1 A Monoid Theorem. If \( a, b \) are elements of \( M \) and \( a * b = b * a \), then \( \left( {a * b}\right) * \left( {a * b}\right) = \left( {a * a}\right) * \left( {b * b}\right) \).
|
Proof.\n\n\[ \left( {a * b}\right) * \left( {a * b}\right) = a * \left( {b * \left( {a * b}\right) }\right) \;\text{ Why? } \]\n\n\[ = a * \left( {\left( {b * a}\right) * b}\right) \;\text{ Why? } \]\n\n\[ = a * \left( {\left( {a * b}\right) * b}\right) \;\text{ Why? } \]\n\n\[ = a * \left( {a * \left( {b * b}\right) }\right) \;\text{ Why? } \]\n\n\[ = \left( {a * a}\right) * \left( {b * b}\right) \;\text{ Why? } \]
|
No
|
Consider the set of \( 2 \times 2 \) real matrices, \( {M}_{2 \times 2}\left( \mathbb{R}\right) \), with the operation of matrix multiplication. In this context, Theorem 11.2.1 can be interpreted as saying that if \( {AB} = {BA} \), then \( {\left( AB\right) }^{2} = {A}^{2}{B}^{2} \).
|
One pair of matrices that this theorem applies to is \( \left( \begin{array}{ll} 2 & 1 \\ 1 & 2 \end{array}\right) \) and \( \left( \begin{matrix} 3 & - 4 \\ - 4 & 3 \end{matrix}\right) \).
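The claim is quick to verify numerically for this pair, using plain-Python \( 2 \times 2 \) matrix multiplication:

```python
def mult(X, Y):
    """Product of two 2x2 matrices."""
    return [
        [X[0][0] * Y[0][0] + X[0][1] * Y[1][0], X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
        [X[1][0] * Y[0][0] + X[1][1] * Y[1][0], X[1][0] * Y[0][1] + X[1][1] * Y[1][1]],
    ]

A = [[2, 1], [1, 2]]
B = [[3, -4], [-4, 3]]

assert mult(A, B) == mult(B, A)                      # A and B commute
assert mult(mult(A, B), mult(A, B)) == mult(mult(A, A), mult(B, B))
print(mult(mult(A, B), mult(A, B)))  # [[29, -20], [-20, 29]]
```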
|
No
|
Theorem 11.3.2 Identities are Unique - Rephrased. If \( G = \left\lbrack {G; * }\right\rbrack \) is a group and \( e \) is an identity of \( G \), then no other element of \( G \) is an identity of \( G \) .
|
Proof. (Indirect): Suppose that \( f \in G, f \neq e \), and \( f \) is an identity of \( G \) . We will show that \( f = e \), which is a contradiction, completing the proof.\n\n\[ f = f * e\;\text{Since}e\text{is an identity} \]\n\n\[ = e\;\text{Since}f\text{is an identity} \]
|
Yes
|
Theorem 11.3.6 Inverse of Inverse Theorem (Rephrased). If a has inverse \( b \) and \( b \) has inverse \( c \), then \( a = c \) .
|
Proof.\n\n\[ a = a * e\;\;e\text{ is the identity of }G \]\n\n\[ = a * \left( {b * c}\right) \;\text{ because }c\text{ is the inverse of }b \]\n\n\[ = \left( {a * b}\right) * c\;\text{ why? } \]\n\n\[ = e * c\;\text{ why? } \]\n\n\[ = c\;\text{ by the identity property } \]
|
No
|
Theorem 11.3.7 Inverse of a Product. If \( a \) and \( b \) are elements of group \( G \), then \( {\left( a * b\right) }^{-1} = {b}^{-1} * {a}^{-1} \) .
|
Proof. Let \( x = {b}^{-1} * {a}^{-1} \) . We will prove that \( x \) inverts \( a * b \) . Since we know\n\nthat the inverse is unique, we will have proved the theorem.\n\n\[ \left( {a * b}\right) * x = \left( {a * b}\right) * \left( {{b}^{-1} * {a}^{-1}}\right) \]\n\n\[ = a * \left( {b * \left( {{b}^{-1} * {a}^{-1}}\right) }\right) \]\n\n\[ = a * \left( {\left( {b * {b}^{-1}}\right) * {a}^{-1}}\right) \]\n\n\[ = a * \left( {e * {a}^{-1}}\right) \]\n\n\[ = a * {a}^{-1} \]\n\n\[ = e \]\n\nSimilarly, \( x * \left( {a * b}\right) = e \) ; therefore, \( {\left( a * b\right) }^{-1} = x = {b}^{-1} * {a}^{-1} \)
|
Yes
|
Theorem 11.3.8 Cancellation Laws. If \( a, b \), and \( c \) are elements of group \( G \), then\n\n\[ \n\text{left cancellation:}\;\left( {a * b = a * c}\right) \Rightarrow b = c \n\]\n\n\[ \n\text{right cancellation:}\;\left( {b * a = c * a}\right) \Rightarrow b = c \n\]
|
Proof. We will prove the left cancellation law. The right law can be proved in exactly the same way. Starting with \( a * b = a * c \), we can operate on both \( a * b \) and \( a * c \) on the left with \( {a}^{-1} \) :\n\n\[ \n{a}^{-1} * \left( {a * b}\right) = {a}^{-1} * \left( {a * c}\right) \n\]\n\nApplying the associative property to both sides we get\n\n\[ \n\left( {{a}^{-1} * a}\right) * b = \left( {{a}^{-1} * a}\right) * c \Rightarrow e * b = e * c \n\]\n\n\[ \n\Rightarrow b = c \n\]
|
Yes
|
Theorem 11.3.9 Linear Equations in a Group. If \( G \) is a group and \( a, b \in G \), the equation \( a * x = b \) has a unique solution, \( x = {a}^{-1} * b \) . In addition, the equation \( x * a = b \) has a unique solution, \( x = b * {a}^{-1} \) .
|
Proof. We prove the theorem only for \( a * x = b \), since the second statement is proven identically.\n\n\[ a * x = b = e * b \]\n\n\[ = \left( {a * {a}^{-1}}\right) * b \]\n\n\[ = a * \left( {{a}^{-1} * b}\right) \]\n\nBy the cancellation law, we can conclude that \( x = {a}^{-1} * b \) .\n\nIf \( c \) and \( d \) are two solutions of the equation \( a * x = b \), then \( a * c = b = a * d \) and, by the cancellation law, \( c = d \) . This verifies that \( {a}^{-1} * b \) is the only solution of \( a * x = b \) .
|
Yes
|
In the group of positive real numbers with multiplication, compute \( {5}^{3} \) and \( {5}^{-3} \).
|
\[ {5}^{3} = {5}^{2} \cdot 5 = \left( {{5}^{1} \cdot 5}\right) \cdot 5 = \left( {\left( {{5}^{0} \cdot 5}\right) \cdot 5}\right) \cdot 5 = \left( {\left( {1 \cdot 5}\right) \cdot 5}\right) \cdot 5 = 5 \cdot 5 \cdot 5 = {125} \] and \[ {5}^{-3} = {\left( {125}\right) }^{-1} = \frac{1}{125} \]
|
Yes
|
Lemma 11.3.13 Let \( G \) be a group. If \( b \in G \) and \( n \geq 0 \), then \( {b}^{n + 1} = b * {b}^{n} \) , and hence \( b * {b}^{n} = {b}^{n} * b \) .
|
Proof. (By induction): If \( n = 0 \) ,\n\n\( {b}^{1} = {b}^{0} * b \) by the definition of exponentiation\n\n\( = e * b \) by the basis for exponentiation\n\n\( = b * e \) by the identity property\n\n\( = b * {b}^{0} \) by the basis for exponentiation\n\nNow assume the formula of the lemma is true for some \( n \geq 0 \) .\n\n\[ \n{b}^{\left( {n + 1}\right) + 1} = {b}^{\left( n + 1\right) } * b\text{by the definition of exponentiation} \]\n\n\[ \n= \left( {b * {b}^{n}}\right) * b\text{by the induction hypothesis} \]\n\n\[ \n= b * \left( {{b}^{n} * b}\right) \text{associativity} \]\n\n\[ \n= b * \left( {b}^{n + 1}\right) \text{definition of exponentiation} \]\n
|
Yes
|
Theorem 11.3.14 Properties of Exponentiation. If a is an element of a group \( G \), and \( m \) and \( n \) are integers,\n\n(1) \( {a}^{-n} = {\left( {a}^{-1}\right) }^{n} \) and hence \( {\left( {a}^{n}\right) }^{-1} = {\left( {a}^{-1}\right) }^{n} \)\n\n(2) \( {a}^{n + m} = {a}^{n} * {a}^{m} \)\n\n(3) \( {\left( {a}^{n}\right) }^{m} = {a}^{nm} \)
|
Proof. We will leave the proofs of these properties to the reader. All three parts can be done by induction. For example the proof of the second part would start by defining the proposition \( p\left( m\right), m \geq 0 \), to be \( {a}^{n + m} = {a}^{n} * {a}^{m} \) for all \( n \) . The basis is \( p\left( 0\right) : {a}^{n + 0} = {a}^{n} * {a}^{0} \) .
|
No
|
Theorem 11.3.15 If \( G \) is a finite group, \( \left| G\right| = n \), and \( a \) is an element of \( G \) , then there exists a positive integer \( m \) such that \( {a}^{m} = e \) and \( m \leq n \) .
|
Proof. Consider the list \( a,{a}^{2},\ldots ,{a}^{n + 1} \) . Since there are \( n + 1 \) elements of \( G \) in this list, there must be some duplication. Suppose that \( {a}^{p} = {a}^{q} \), with \( p < q \) . Let \( m = q - p \) . Then\n\n\[ \n{a}^{m} = {a}^{q - p} \n\]\n\n\[ \n= {a}^{q} * {a}^{-p} \n\]\n\n\[ \n= {a}^{q} * {\left( {a}^{p}\right) }^{-1} \n\]\n\n\[ \n= {a}^{q} * {\left( {a}^{q}\right) }^{-1} \n\]\n\n\[ \n= e \n\]\n\nFurthermore, since \( 1 \leq p < q \leq n + 1, m = q - p \leq n \) .
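Theorem 11.3.15 can be illustrated in a concrete finite group. A sketch using the multiplicative group of nonzero integers mod 7 (a group of order 6), with a brute-force search over powers:

```python
def order(a, p):
    """Smallest m >= 1 with a^m congruent to 1 mod p (assumes a is invertible mod p)."""
    x, m = a % p, 1
    while x != 1:
        x = (x * a) % p
        m += 1
    return m

# every element of the group {1, ..., 6} under multiplication mod 7
# satisfies a^m = 1 for some m <= |G| = 6, as the theorem guarantees
for a in range(1, 7):
    m = order(a, 7)
    assert pow(a, m, 7) == 1 and m <= 6

print(order(3, 7))  # 6
```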
|
Yes
|
Theorem 11.4.9 If \( a \) and \( b \) are positive integers, the smallest positive value of \( {ax} + {by} \) is the greatest common divisor of \( a \) and \( b,\gcd \left( {a, b}\right) \) .
|
Proof. If \( g = \gcd \left( {a, b}\right) \), since \( g \mid a \) and \( g \mid b \), we know that \( g \mid \left( {{ax} + {by}}\right) \) for any integers \( x \) and \( y \), so no positive value of \( {ax} + {by} \) can be less than \( g \) . To show that \( g \) is exactly the least positive value, we show that \( g \) can be attained by extending the Euclidean Algorithm. Performing the extended algorithm involves building a table of numbers. The way in which it is built maintains an invariant, and by The Invariant Relation Theorem, we can be sure that the desired values of \( x \) and \( y \) are produced.\n\nTo illustrate the algorithm, Table 11.4.10 displays how to compute \( \gcd \left( {{152},{53}}\right) \) . In the \( r \) column, you will find 152 and 53, and then the successive remainders from division. So each number in \( r \) after the first two is the remainder after dividing the number immediately above it into the next number up. To the left of each remainder is the quotient from the division. In this case the third row of the table tells us that \( {152} = {53} \cdot 2 + {46} \) . The last nonzero value in \( r \) is the greatest common divisor.\n\nTable 11.4.10 The extended Euclidean algorithm to compute \( \gcd \left( {{152},{53}}\right) \)\n\n\[ \begin{array}{rrrr} q & r & s & t \\ & {152} & 1 & 0 \\ & {53} & 0 & 1 \\ 2 & {46} & 1 & -2 \\ 1 & 7 & -1 & 3 \\ 6 & 4 & 7 & -{20} \\ 1 & 3 & -8 & {23} \\ 1 & 1 & {15} & -{43} \\ 3 & 0 & & \end{array} \]\n\nEach row maintains the invariant \( r = {152}s + {53}t \) . The row with \( r = 1 \) therefore gives \( \gcd \left( {{152},{53}}\right) = 1 = {152} \cdot {15} + {53} \cdot \left( {-{43}}\right) \), so \( x = {15} \) and \( y = -{43} \) attain the minimum.
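The table-building procedure can be sketched as a short recursive Python function. It returns \( (g, x, y) \) with \( ax + by = g = \gcd(a, b) \), maintaining the same invariant as the table: every remainder is an integer combination of \( a \) and \( b \).

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    # back-substitute: b*x + (a - (a//b)*b)*y == g
    return g, y, x - (a // b) * y

g, x, y = egcd(152, 53)
print(g, x, y)  # 1 15 -43, i.e. 152*15 + 53*(-43) = 1
```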
|
Yes
|
Theorem 11.4.15 Additive Inverses in \( {\mathbb{Z}}_{n} \) . If \( a \in {\mathbb{Z}}_{n}, a \neq 0 \), then the additive inverse of \( a \) is \( n - a \) .
|
Proof. \( a + \left( {n - a}\right) = n \equiv 0\left( {\;\operatorname{mod}\;n}\right) \), since \( n = n \cdot 1 + 0 \) . Therefore, \( a{ + }_{n}\left( {n - a}\right) = 0 \) .
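The theorem is easy to confirm exhaustively for a small modulus; a quick check for \( n = 12 \):

```python
# Verify Theorem 11.4.15 for n = 12: the additive inverse of a in Z_n is n - a.
n = 12
for a in range(1, n):
    assert (a + (n - a)) % n == 0

print((5 + (n - 5)) % n)  # 0, so 7 is the additive inverse of 5 in Z_12
```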
|
Yes
|
Theorem 11.5.3 Subgroup Conditions. To determine whether \( H \), a subset of group \( \left\lbrack {G; * }\right\rbrack \), is a subgroup, it is sufficient to prove:\n\n(a) \( H \) is closed under \( * \) ; that is, \( a, b \in H \Rightarrow a * b \in H \) ;\n\n(b) \( H \) contains the identity element for \( * \) ; and\n\n(c) \( H \) contains the inverse of each of its elements; that is, \( a \in H \Rightarrow {a}^{-1} \in H \) .
|
Proof. Our proof consists of verifying that if the three properties above are true, then all the axioms of a group are true for \( \left\lbrack {H; * }\right\rbrack \) . By Condition (a), \( * \) can be considered an operation on \( H \) . The associative, identity, and inverse properties are the axioms that are needed. The identity and inverse properties are true by conditions (b) and (c), respectively, leaving only the associative property. Since, \( \left\lbrack {G; * }\right\rbrack \) is a group, \( a * \left( {b * c}\right) = \left( {a * b}\right) * c \) for all \( a, b, c \in G \) . Certainly, if this equation is true for all choices of three elements from \( G \), it will be true for all choices of three elements from \( H \), since \( H \) is a subset of \( G \) .
|
Yes
|
We can verify that \( 2\mathbb{Z} \leq \mathbb{Z} \), as stated in Example 11.5.2.
|
Whenever you want to discuss a subset, you must find some convenient way of describing its elements. An element of \( 2\mathbb{Z} \) can be described as 2 times an integer; that is, \( a \in 2\mathbb{Z} \) is equivalent to \( {\left( \exists k\right) }_{\mathbb{Z}}\left( {a = {2k}}\right) \) . Now we can verify that the three conditions of Theorem 11.5.3 are true for \( 2\mathbb{Z} \) . First, if \( a, b \in 2\mathbb{Z} \), then there exist \( j, k \in \mathbb{Z} \) such that \( a = {2j} \) and \( b = {2k} \) . A common error is to write something like \( a = {2j} \) and \( b = {2j} \) . This would mean that \( a = b \), which is not necessarily true. That is why two different variables are needed to describe \( a \) and \( b \) . Returning to our proof, we can add \( a \) and \( b : a + b = {2j} + {2k} = 2\left( {j + k}\right) \) . Since \( j + k \) is an integer, \( a + b \) is an element of \( 2\mathbb{Z} \) . Second, the identity,0, belongs to \( 2\mathbb{Z}\left( {0 = 2\left( 0\right) }\right) \) . Finally, if \( a \in 2\mathbb{Z} \) and \( a = {2k}, - a = - \left( {2k}\right) = 2\left( {-k}\right) \) , and \( - k \in \mathbb{Z} \), therefore, \( - a \in 2\mathbb{Z} \) . By Theorem 11.5.3, \( 2\mathbb{Z} \leq \mathbb{Z} \) .
|
Yes
|
Theorem 11.5.5 Condition for a Subgroup of Finite Group. Given that \( \left\lbrack {G; * }\right\rbrack \) is a finite group and \( H \) is a nonempty subset of \( G \), if \( H \) is closed under \( * \), then \( H \) is a subgroup of \( G \) .
|
Proof. In this proof, we demonstrate that Conditions (b) and (c) of Theorem 11.5.3 follow from the closure of \( H \) under \( * \), which is condition (a) of the theorem. First, select any element of \( H \) ; call it \( \beta \) . The powers of \( \beta : {\beta }^{1},{\beta }^{2} \) , \( {\beta }^{3},\ldots \) are all in \( H \) by the closure property. By Theorem 11.3.15, there exists \( m, m \leq \left| G\right| \), such that \( {\beta }^{m} = e \) ; hence \( e \in H \) . To prove that (c) is true, we let \( a \) be any element of \( H \) . If \( a = e \), then \( {a}^{-1} \) is in \( H \) since \( {e}^{-1} = e \) . If \( a \neq e \) , \( {a}^{q} = e \) for some \( q \) between 2 and \( \left| G\right| \) and\n\n\[ e = {a}^{q} = {a}^{q - 1} * a \]\n\nTherefore, \( {a}^{-1} = {a}^{q - 1} \), which belongs to \( H \) since \( q - 1 \geq 1 \) .
|
Yes
|
Example 11.6.2 A Direct Product of Monoids. Consider the monoids \( \mathbb{N} \) (the set of natural numbers with addition) and \( {B}^{ * } \) (the set of finite strings of \( 0 \) ’s and 1’s with concatenation). The direct product of \( \mathbb{N} \) with \( {B}^{ * } \) is a monoid. We illustrate its operation, which we will denote by \( * \), with examples:
|
\[ \left( {4,{001}}\right) * \left( {3,{11}}\right) = \left( {4 + 3,{001} + {11}}\right) = \left( {7,{00111}}\right) \] \[ \left( {0,{11010}}\right) * \left( {3,{01}}\right) = \left( {3,{1101001}}\right) \] \[ \left( {0,\lambda }\right) * \left( {{129},{00011}}\right) = \left( {0 + {129},\lambda + {00011}}\right) = \left( {{129},{00011}}\right) \] \[ \left( {2,{01}}\right) * \left( {8,{10}}\right) = \left( {{10},{0110}}\right) \text{, and} \] \[ \left( {8,{10}}\right) * \left( {2,{01}}\right) = \left( {{10},{1001}}\right) \]
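The componentwise operation on \( \mathbb{N} \times {B}^{*} \) can be sketched with Python tuples: add the first coordinates, concatenate the second. The empty string stands in for \( \lambda \) in this sketch.

```python
def star(p, q):
    """Componentwise operation on N x B*: + is addition, then concatenation."""
    return (p[0] + q[0], p[1] + q[1])

print(star((4, "001"), (3, "11")))    # (7, '00111')
print(star((0, ""), (129, "00011")))  # (129, '00011'); (0, lambda) is the identity

# the direct product is not commutative here, because concatenation is not:
print(star((2, "01"), (8, "10")), star((8, "10"), (2, "01")))
```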
|
Yes
|
Theorem 11.6.5 The Direct Product of Groups is a Group. The direct product of two or more groups is a group; that is, the algebraic properties of a system obtained by taking the direct product of two or more groups includes the group axioms.
|
Proof. We will only present the proof of this theorem for the direct product of two groups. Some slight revisions can be made to produce a proof for any number of factors.\n\nStating that the direct product of two groups is a group is a short way of saying that if \( \left\lbrack {{G}_{1};{ * }_{1}}\right\rbrack \) and \( \left\lbrack {{G}_{2};{ * }_{2}}\right\rbrack \) are groups, then \( \left\lbrack {{G}_{1} \times {G}_{2}; * }\right\rbrack \) is also a group, where \( * \) is the componentwise operation on \( {G}_{1} \times {G}_{2} \) . Associativity of \( * \) : If \( a, b, c \in {G}_{1} \times {G}_{2} \),\n\n\[ a * \left( {b * c}\right) = \left( {{a}_{1},{a}_{2}}\right) * \left( {\left( {{b}_{1},{b}_{2}}\right) * \left( {{c}_{1},{c}_{2}}\right) }\right) \]\n\n\[ = \left( {{a}_{1},{a}_{2}}\right) * \left( {{b}_{1}{ * }_{1}{c}_{1},{b}_{2}{ * }_{2}{c}_{2}}\right) \]\n\n\[ = \left( {{a}_{1}{ * }_{1}\left( {{b}_{1}{ * }_{1}{c}_{1}}\right) ,{a}_{2}{ * }_{2}\left( {{b}_{2}{ * }_{2}{c}_{2}}\right) }\right) \]\n\n\[ = \left( {\left( {{a}_{1}{ * }_{1}{b}_{1}}\right) { * }_{1}{c}_{1},\left( {{a}_{2}{ * }_{2}{b}_{2}}\right) { * }_{2}{c}_{2}}\right) \]\n\n\[ = \left( {{a}_{1}{ * }_{1}{b}_{1},{a}_{2}{ * }_{2}{b}_{2}}\right) * \left( {{c}_{1},{c}_{2}}\right) \]\n\n\[ = \left( {\left( {{a}_{1},{a}_{2}}\right) * \left( {{b}_{1},{b}_{2}}\right) }\right) * \left( {{c}_{1},{c}_{2}}\right) \]\n\n\[ = \left( {a * b}\right) * c \]\n\nNotice how the associativity property hinges on the associativity in each factor. An identity for \( * \) : As you might expect, if \( {e}_{1} \) and \( {e}_{2} \) are identities for \( {G}_{1} \) and \( {G}_{2} \), respectively, then \( e = \left( {{e}_{1},{e}_{2}}\right) \) is the identity for \( {G}_{1} \times {G}_{2} \) . If \( a \in {G}_{1} \times {G}_{2} \),\n\n\[ a * e = \left( {{a}_{1},{a}_{2}}\right) * \left( {{e}_{1},{e}_{2}}\right) \]\n\n\[ = \left( {{a}_{1}{ * }_{1}{e}_{1},{a}_{2}{ * }_{2}{e}_{2}}\right) \]\n\n\[ = \left( {{a}_{1},{a}_{2}}\right) = a \]\n\nSimilarly, \( e * a = a \) .\n\nInverses in \( {G}_{1} \times {G}_{2} \) : The inverse of an element is determined componentwise: \( {a}^{-1} = {\left( {a}_{1},{a}_{2}\right) }^{-1} = \left( {{a}_{1}{}^{-1},{a}_{2}{}^{-1}}\right) \) . To verify, we compute \( a * {a}^{-1} \) :\n\n\[ a * {a}^{-1} = \left( {{a}_{1},{a}_{2}}\right) * \left( {{a}_{1}{}^{-1},{a}_{2}{}^{-1}}\right) \]\n\n\[ = \left( {{a}_{1}{ * }_{1}{a}_{1}{}^{-1},{a}_{2}{ * }_{2}{a}_{2}{}^{-1}}\right) \]\n\n\[ = \left( {{e}_{1},{e}_{2}}\right) = e \]\n\nSimilarly, \( {a}^{-1} * a = e \) .
|
Yes
|
Example 11.7.2 How to Do Greek Arithmetic. Imagine that you are a six-year-old child who has been reared in an English-speaking family, has moved to Greece, and has been enrolled in a Greek school. Suppose that your new teacher asks the class to do the following addition problem that has been written out in Greek.\n\n\[ \n{\tau \rho }\acute{\iota }\alpha \;{\sigma \upsilon \nu }\;\tau \acute{\epsilon }{\sigma \sigma \varepsilon \rho \alpha }\;{\iota \sigma o}\acute{\upsilon }{\tau \alpha \iota }\;\text{___} \n\]
|
The natural thing for you to do is to take out your Greek-English/English-Greek dictionary and translate the Greek words to English, as outlined in Figure 11.7.3. After you've solved the problem, you can consult the same dictionary to find the proper Greek word that the teacher wants. Although this is not the recommended method of learning a foreign language, it will surely yield the correct answer to the problem. Mathematically, we may say that the system of Greek integers with addition \( \left( {\sigma \upsilon \nu }\right) \) is isomorphic to English integers with addition (plus). The problem of translation between natural languages is more difficult than this though, because two complete natural languages are not isomorphic, or at least the isomorphism between them is not contained in a simple dictionary.
|
Yes
|
Software Implementation of Sets. In this example, we will describe how set variables can be implemented on a computer. We will describe the two systems first and then describe the isomorphism between them.
|
System 1: The power set of \( \{ 1,2,3,4,5\} \) with the operation union, \( \cup \) . For simplicity, we will only discuss union. However, the other operations are implemented in a similar way.\n\nSystem 2: Strings of five bits of computer memory with an OR gate. Individual bit values are either zero or one, so the elements of this system can be visualized as sequences of five 0 's and 1's. An OR gate, Figure 11.7.5, is a small piece of computer hardware that accepts two bit values at any one time and outputs either a zero or one, depending on the inputs. The output of an OR gate is one, except when the two bit values that it accepts are both zero, in which case the output is zero. The operation on this system actually consists of sequentially inputting the values of two bit strings into the OR gate. The result will be a new string of five 0 's and 1's. An alternate method of operating in this system is to use five OR gates and to input corresponding pairs of bits from the input strings into the gates concurrently.\n\nThe Isomorphism: Since each system has only one operation, it is clear that union and the OR gate translate into one another. The translation between sets and bit strings is easiest to describe by showing how to construct a set from a bit string. If \( {a}_{1}{a}_{2}{a}_{3}{a}_{4}{a}_{5} \), is a bit string in System 2, the set that it translates to contains the number \( k \) if and only if \( {a}_{k} \) equals 1 . For example, 10001 is translated to the set \( \{ 1,5\} \), while the set \( \{ 1,2\} \) is translated to 11000 . Now imagine that your computer is like the child who knows English and must do a Greek problem. To execute a program that has code that includes the set expression \( \{ 1,2\} \cup \{ 1,5\} \), it will follow the same procedure as the child to obtain the result, as shown in Figure 11.7.6.
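The translation between subsets of \( \{1, 2, 3, 4, 5\} \) and five-bit strings can be sketched in Python. This is an illustration of the isomorphism, with helper names invented for the sketch; the OR gate is modeled bit by bit.

```python
def to_bits(s):
    """Translate a subset of {1,...,5} to a five-character bit string."""
    return "".join("1" if k in s else "0" for k in range(1, 6))

def to_set(bits):
    """Translate a five-character bit string back to a subset of {1,...,5}."""
    return {k for k, b in enumerate(bits, start=1) if b == "1"}

def or_strings(u, v):
    """Feed corresponding bit pairs through an OR gate."""
    return "".join("1" if a == "1" or b == "1" else "0" for a, b in zip(u, v))

assert to_bits({1, 5}) == "10001" and to_bits({1, 2}) == "11000"

# translate, OR, translate back -- the same procedure as the Greek arithmetic example
result = to_set(or_strings(to_bits({1, 2}), to_bits({1, 5})))
print(result)  # {1, 2, 5}, which equals {1, 2} union {1, 5}
```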
|
Yes
|
Example 11.7.7 Multiplying without doing multiplication. This isomorphism is between \( \left\lbrack {{\mathbb{R}}^{ + }; \cdot }\right\rbrack \) and \( \left\lbrack {\mathbb{R}; + }\right\rbrack \) . Until the 1970s, when the price of calculators dropped, multiplication and exponentiation were performed with an isomorphism between these systems. The isomorphism from \( {\mathbb{R}}^{ + } \) to \( \mathbb{R} \) between the two groups is that \( \cdot \) is translated into \( + \) and any positive real number \( a \) is translated to the logarithm of \( a \) . To translate back from \( \mathbb{R} \) to \( {\mathbb{R}}^{ + } \), you invert the logarithm function. If base ten logarithms are used, an element of \( \mathbb{R}, b \), will be translated to \( {10}^{b} \) . In pre-calculator days, the translation was done with a table of logarithms or with a slide rule. An example of how the isomorphism is used appears in Figure 11.7.8.
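The translate-add-translate-back procedure can be sketched in a few lines of Python (floating point stands in for the log table, so the answer is only approximate):

```python
import math

def times_via_logs(a, b):
    """Multiply two positive reals by adding their base-10 logarithms and inverting."""
    return 10 ** (math.log10(a) + math.log10(b))

product = times_via_logs(152.0, 53.0)
print(product)  # approximately 8056.0, the same as 152 * 53
assert math.isclose(product, 152.0 * 53.0)
```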
|
## Note 11.7.11\n\n(b) Any application of this definition requires a procedure outlined in Figure 11.7.10. The first condition, that an isomorphism be a bijection, reflects the fact that every true statement in the first group should have exactly one corresponding true statement in the second group. This is exactly why we run into difficulty in translating between two natural languages. To see how Condition (b) of the formal definition is consistent with the informal definition, consider the function \( L : {\mathbb{R}}^{ + } \rightarrow \mathbb{R} \) defined by \( L\left( x\right) = {\log }_{10}x \) . The translation diagram between \( {\mathbb{R}}^{ + } \) and \( \mathbb{R} \) for the multiplication problem \( a \cdot b \) appears in Figure 11.7.12. We arrive at the same result by computing \( {L}^{-1}\left( {L\left( a\right) + L\left( b\right) }\right) \) as we do by computing \( a \cdot b \) . If we apply the function \( L \) to the two results, we get the same image:\n\n\[ L\left( {a \cdot b}\right) = L\left( {{L}^{-1}\left( {L\left( a\right) + L\left( b\right) }\right) }\right) = L\left( a\right) + L\left( b\right) \]\n\n(11.7.1)\n\nsince \( L\left( {{L}^{-1}\left( x\right) }\right) = x \) . Note that (11.7.1) is exactly Condition \( \mathrm{b} \) of the formal definition applied to the two groups \( {\mathbb{R}}^{ + } \) and \( \mathbb{R} \) .
|
Yes
|
Consider \( G = \left\{ {\left. \left( \begin{array}{ll} 1 & a \\ 0 & 1 \end{array}\right) \right| \;a \in \mathbb{R}}\right\} \) with matrix multiplication. The group \( \left\lbrack {\mathbb{R}; + }\right\rbrack \) is isomorphic to \( G \) .
|
Our translation rule is the function \( f : \mathbb{R} \rightarrow G \) defined by \( f\left( a\right) = \left( \begin{array}{ll} 1 & a \\ 0 & 1 \end{array}\right) \) . Since groups have only one operation, there is no need to state explicitly that addition is translated to matrix multiplication. That \( f \) is a bijection is clear from its definition.\n\nIf \( a \) and \( b \) are any real numbers,\n\n\[ f\left( a\right) f\left( b\right) = \left( \begin{array}{ll} 1 & a \\ 0 & 1 \end{array}\right) \left( \begin{array}{ll} 1 & b \\ 0 & 1 \end{array}\right) \]\n\n\[ = \left( \begin{matrix} 1 & a + b \\ 0 & 1 \end{matrix}\right) \]\n\n\[ = f\left( {a + b}\right) \]\n\nWe can apply this translation rule to determine the inverse of a matrix in \( G \) . We know that \( a + \left( {-a}\right) = 0 \) is a true statement in \( \mathbb{R} \) . Using \( f \) to translate this statement, we get\n\n\[ f\left( a\right) f\left( {-a}\right) = f\left( 0\right) \]\n\nor\n\n\[ \left( \begin{array}{ll} 1 & a \\ 0 & 1 \end{array}\right) \left( \begin{matrix} 1 & - a \\ 0 & 1 \end{matrix}\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \]\n\ntherefore,\n\n\[ {\left( \begin{array}{ll} 1 & a \\ 0 & 1 \end{array}\right) }^{-1} = \left( \begin{matrix} 1 & - a \\ 0 & 1 \end{matrix}\right) \]
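The translation rule \( f(a) = \left( \begin{smallmatrix} 1 & a \\ 0 & 1 \end{smallmatrix}\right) \) can be checked numerically; a sketch with plain nested lists:

```python
def f(a):
    """The translation rule from [R; +] to the matrix group G."""
    return [[1, a], [0, 1]]

def matmul(A, B):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 3, 5
assert matmul(f(a), f(b)) == f(a + b)  # f translates addition to matrix product
assert matmul(f(a), f(-a)) == f(0)     # so f(-a) is the inverse of f(a)
print(matmul(f(a), f(b)))  # [[1, 8], [0, 1]]
```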
|
Yes
|
Theorem 12.1.4 Elementary Operations on Equations. If any sequence of the following operations is performed on a system of equations, the resulting system is equivalent to the original system:\n\n(a) Interchange any two equations in the system.\n\n(b) Multiply both sides of any equation by a nonzero constant.\n\n(c) Multiply both sides of any equation by a nonzero constant and add the result to a second equation in the system, with the sum replacing the latter equation.
|
Let us now use the above theorem to work out the details of Example 12.1.3 and see how we can arrive at the simpler system.\n\nThe original system:\n\n\[ \n4{x}_{1} + 2{x}_{2} + {x}_{3} = 1 \n\]\n\n\[ \n2{x}_{1} + {x}_{2} + {x}_{3} = 4 \n\]\n\n(12.1.1)\n\n\[ \n2{x}_{1} + 2{x}_{2} + {x}_{3} = 3 \n\]\n\nStep 1. We will first change the coefficient of \( {x}_{1} \) in the first equation to one and then use it as a pivot to obtain 0's for the coefficients of \( {x}_{1} \) in Equations 2 and 3.\n\n- Multiply Equation 1 by \( \frac{1}{4} \) to obtain\n\n\[ \n{x}_{1} + \frac{{x}_{2}}{2} + \frac{{x}_{3}}{4} = \frac{1}{4} \n\]\n\n\[ \n2{x}_{1} + {x}_{2} + {x}_{3} = 4 \n\]\n\n(12.1.2)\n\n\[ \n2{x}_{1} + 2{x}_{2} + {x}_{3} = 3 \n\]\n\n- Multiply Equation 1 by -2 and add the result to Equation 2 to obtain\n\n\[ \n{x}_{1} + \frac{{x}_{2}}{2} + \frac{{x}_{3}}{4} = \frac{1}{4} \n\]\n\n\[ \n0{x}_{1} + 0{x}_{2} + \frac{{x}_{3}}{2} = \frac{7}{2} \n\]\n\n(12.1.3)\n\n\[ \n2{x}_{1} + 2{x}_{2} + {x}_{3} = 3 \n\]\n\n- Multiply Equation 1 by -2 and add the result to Equation 3 to obtain\n\n\[ \n{x}_{1} + \frac{{x}_{2}}{2} + \frac{{x}_{3}}{4} = \frac{1}{4} \n\]\n\n\[ \n0{x}_{1} + 0{x}_{2} + \frac{{x}_{3}}{2} = \frac{7}{2} \n\]\n\n(12.1.4)\n\n\[ \n0{x}_{1} + {x}_{2} + \frac{{x}_{3}}{2} = \frac{5}{2} \n\]\n\nWe've explicitly written terms with zero coefficients such as \( 0{x}_{1} \) to make the point that all variables can be thought of as being involved in all equations. After this example is complete, we will discontinue this practice in favor of the normal practice of making these terms "disappear".
|
No
|
Theorem 12.1.5 Elementary Row Operations. If any sequence of the following operations is performed on the augmented matrix of a system of equations, the resulting matrix is the augmented matrix of a system that is equivalent to the original system. The following operations on a matrix are called elementary row operations:\n\n(1) Exchange any two rows of the matrix.\n\n(2) Multiply any row of the matrix by a nonzero constant.\n\n(3) Multiply any row of the matrix by a nonzero constant and add the result to a second row, with the sum replacing that second row.
|
If we use the notation \( {R}_{i} \) to stand for Row \( i \) of a matrix and \( \rightarrow \) to stand for row equivalence, then\n\n\[ A\overset{c{R}_{i} + {R}_{j}}{ \rightarrow }B \]\n\nmeans that the matrix \( B \) is obtained from the matrix \( A \) by multiplying the Row \( i \) of \( A \) by \( c \) and adding the result to Row \( j \) . The operation of multiplying row \( i \) by \( c \) is indicated by\n\n\[ A\xrightarrow[]{c{R}_{i}}B \]\n\nwhile exchanging rows \( i \) and \( j \) is denoted by\n\n\[ A\overset{{R}_{i} \leftrightarrow {R}_{j}}{ \rightarrow }B. \]\n\nThe matrix notation for the system given in our first example, with the subsequent steps, is:\n\n\[ \left( \begin{matrix} 4 & 2 & 1 & 1 \\ 2 & 1 & 1 & 4 \\ 2 & 2 & 1 & 3 \end{matrix}\right) \;\overset{\frac{1}{4}{R}_{1}}{ \rightarrow }\;\left( \begin{matrix} 1 & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ 2 & 1 & 1 & 4 \\ 2 & 2 & 1 & 3 \end{matrix}\right) \;\overset{-2{R}_{1} + {R}_{2}}{ \rightarrow }\;\left( \begin{matrix} 1 & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & \frac{1}{2} & \frac{7}{2} \\ 2 & 2 & 1 & 3 \end{matrix}\right) \]\n\n\[ \overset{-2{R}_{1} + {R}_{3}}{ \rightarrow }\left( \begin{matrix} 1 & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & \frac{1}{2} & \frac{7}{2} \\ 0 & 1 & \frac{1}{2} & \frac{5}{2} \end{matrix}\right) \;\overset{{R}_{2} \leftrightarrow {R}_{3}}{ \rightarrow }\left( \begin{matrix} 1 & \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ 0 & 1 & \frac{1}{2} & \frac{5}{2} \\ 0 & 0 & \frac{1}{2} & \frac{7}{2} \end{matrix}\right) \]\n\n\[ \overset{-\frac{1}{2}{R}_{2} + {R}_{1}}{ \rightarrow }\;\left( \begin{matrix} 1 & 0 & 0 & - 1 \\ 0 & 1 & \frac{1}{2} & \frac{5}{2} \\ 0 & 0 & \frac{1}{2} & \frac{7}{2} \end{matrix}\right) \;\overset{2{R}_{3}}{ \rightarrow }\;\left( \begin{matrix} 1 & 0 & 0 & - 1 \\ 0 & 1 & \frac{1}{2} & \frac{5}{2} \\ 0 & 0 & 1 & 7 \end{matrix}\right) \]\n\n\[ \overset{-\frac{1}{2}{R}_{3} + {R}_{2}}{ \rightarrow }\left( \begin{matrix} 1 & 0 & 0 & - 1 \\ 0 & 1 & 0 & - 1 \\ 0 & 0 & 1 & 7 \end{matrix}\right) \]\n\nThis again gives us the solution. This procedure is called the Gauss-Jordan elimination method.
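The three elementary row operations can be sketched as a compact Gauss-Jordan routine over the rationals. This is an illustrative sketch, not library code: it assumes the system has a unique solution (a nonzero pivot can always be found), and uses exact `Fraction` arithmetic so the result matches the hand computation.

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce an n x (n+1) augmented matrix and return the solution column."""
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    for i in range(n):
        # operation 1: swap up a row with a nonzero pivot (raises StopIteration if none exists)
        p = next(j for j in range(i, n) if M[j][i] != 0)
        M[i], M[p] = M[p], M[i]
        # operation 2: scale row i so the pivot is 1
        piv = M[i][i]
        M[i] = [x / piv for x in M[i]]
        # operation 3: clear column i in every other row
        for j in range(n):
            if j != i:
                c = M[j][i]
                M[j] = [a - c * b for a, b in zip(M[j], M[i])]
    return [row[-1] for row in M]

# the augmented matrix of the example system
sol = gauss_jordan([[4, 2, 1, 1], [2, 1, 1, 4], [2, 2, 1, 3]])
print(sol)  # [Fraction(-1, 1), Fraction(-1, 1), Fraction(7, 1)]
```

The routine performs the same swap \( {R}_{2} \leftrightarrow {R}_{3} \) that the worked example needs, since the second pivot position becomes zero after the first elimination step.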
|
Yes
|
Find all solutions to the system\n\n\[ \n{x}_{1} + 3{x}_{2} + {x}_{3} = 2 \n\]\n\n\[ \n{x}_{1} + {x}_{2} + 5{x}_{3} = 4 \n\]\n\n\[ \n2{x}_{1} + 2{x}_{2} + {10}{x}_{3} = 6 \n\]
|
The reader can verify that the augmented matrix of this system, \( \left( \begin{array}{llll} 1 & 3 & 1 & 2 \\ 1 & 1 & 5 & 4 \\ 2 & 2 & {10} & 6 \end{array}\right) \) ,\n\nreduces to \( \left( \begin{matrix} 1 & 3 & 1 & 2 \\ 1 & 1 & 5 & 4 \\ 0 & 0 & 0 & - 2 \end{matrix}\right) \) .\n\nWe can attempt to row-reduce this matrix further if we wish. However, any further row-reduction will not substantially change the last row, which, in equation form, is \( 0{x}_{1} + 0{x}_{2} + 0{x}_{3} = - 2 \), or simply \( 0 = - 2 \) . It is clear that we cannot find real numbers \( {x}_{1},{x}_{2} \), and \( {x}_{3} \) that will satisfy this equation. Hence we cannot find real numbers that will satisfy all three original equations simultaneously. When this occurs, we say that the system has no solution, or the solution set is empty.
|
Yes
|
Example 12.1.8 A system with an infinite number of solutions. Next, let's attempt to find all of the solutions to:\n\n\[ \n{x}_{1} + 6{x}_{2} + 2{x}_{3} = 1 \]\n\n\[ \n2{x}_{1} + {x}_{2} + 3{x}_{3} = 2 \]\n\n\[ \n4{x}_{1} + 2{x}_{2} + 6{x}_{3} = 4 \]
|
The augmented matrix for the system is\n\n\[ \n\left( \begin{array}{llll} 1 & 6 & 2 & 1 \\ 2 & 1 & 3 & 2 \\ 4 & 2 & 6 & 4 \end{array}\right) \]\n\n(12.1.8)\n\nwhich reduces to\n\n\[ \n\left( \begin{matrix} 1 & 0 & \frac{16}{11} & 1 \\ 0 & 1 & \frac{1}{11} & 0 \\ 0 & 0 & 0 & 0 \end{matrix}\right) \]\n\n(12.1.9)\n\nIf we apply additional elementary row operations to this matrix, it will only become more complicated. In particular, we cannot get a one in the third row, third column. Since the matrix is in simplest form, we will express it in equation format to help us determine the solution set.\n\n\[ \n{x}_{1} + \frac{16}{11}{x}_{3} = 1 \]\n\n\[ \n{x}_{2} + \frac{1}{11}{x}_{3} = 0 \]\n\n(12.1.10)\n\n\[ \n0 = 0 \]\n\nAny real numbers will satisfy the last equation. However, the first equation can be rewritten as \( {x}_{1} = 1 - \frac{16}{11}{x}_{3} \), which describes the coordinate \( {x}_{1} \) in terms of \( {x}_{3} \) . Similarly, the second equation gives \( {x}_{2} \) in terms of \( {x}_{3} \) . A convenient way of listing the solutions of this system is to use set notation. If we call the solution set of the system \( S \), then\n\n\[ \nS = \left\{ {\left( {1 - \frac{16}{11}{x}_{3}, - \frac{1}{11}{x}_{3},{x}_{3}}\right) \mid {x}_{3} \in \mathbb{R}}\right\} .\n\nWhat this means is that if we wanted to list all solutions, we would replace \( {x}_{3} \) by all possible numbers. Clearly, there is an infinite number of solutions, two of which are \( \left( {1,0,0}\right) \) and \( \left( {-{15}, - 1,{11}}\right) \), when \( {x}_{3} \) takes on the values 0 and 11, respectively.
|
Yes
|
If we apply The Gauss-Jordan Algorithm to the system\n\n\[ \n5{x}_{1} + {x}_{2} + 2{x}_{3} + {x}_{4} = 2 \n\]\n\n\[ \n3{x}_{1} + {x}_{2} - 2{x}_{3} = 5 \n\]\n\n\[ \n{x}_{1} + {x}_{2} + 3{x}_{3} - {x}_{4} = - 1 \n\]\n\nthe augmented matrix is\n\n\[ \n\left( \begin{matrix} 5 & 1 & 2 & 1 & 2 \\ 3 & 1 & - 2 & 0 & 5 \\ 1 & 1 & 3 & - 1 & - 1 \end{matrix}\right) \n\]
|
is reduced to\n\n\[ \n\left( \begin{matrix} 1 & 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & 0 & - \frac{3}{2} & \frac{3}{2} \\ 0 & 0 & 1 & 0 & - 1 \end{matrix}\right) \n\]\n\nTherefore, \( {x}_{4} \) is a free variable in the solution, and the general solution of the system is\n\n\[ \nx = \left( \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \\ {x}_{4} \end{array}\right) = \left( \begin{matrix} \frac{1}{2} - \frac{1}{2}{x}_{4} \\ \frac{3}{2} + \frac{3}{2}{x}_{4} \\ - 1 \\ {x}_{4} \end{matrix}\right) \n\]\n\nThis conclusion is easy to see if you revert back to the equations that the final reduced matrix represents.
|
Yes
|
Example 12.2.2 Recognition of a non-invertible matrix. The reader can verify that if \( A = \left( \begin{matrix} 1 & 2 & 1 \\ - 1 & - 2 & - 1 \\ 0 & 5 & 8 \end{matrix}\right) \) then the augmented matrix \( \left( \begin{matrix} 1 & 2 & 1 & 1 & 0 & 0 \\ - 1 & - 2 & - 1 & 0 & 1 & 0 \\ 0 & 5 & 8 & 0 & 0 & 1 \end{matrix}\right) \) reduces to
|
\[ \left( \begin{array}{llllll} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 5 & 8 & 0 & 0 & 1 \end{array}\right) \] (12.2.4) Although this matrix can be row-reduced further, it is not necessary to do so since, in equation form, we have: Table 12.2.3 \[ {x}_{11} + 2{x}_{21} + {x}_{31} = 1\;{x}_{12} + 2{x}_{22} + {x}_{32} = 0\;{x}_{13} + 2{x}_{23} + {x}_{33} = 0 \] \[ 0 = 1\;0 = 1\;0 = 0 \] \[ 5{x}_{21} + 8{x}_{31} = 0\;5{x}_{22} + 8{x}_{32} = 0\;5{x}_{23} + 8{x}_{33} = 1 \] Clearly, there are no solutions to the first two systems, therefore \( {A}^{-1} \) does not exist. From this discussion it should be obvious to the reader that the zero row of the coefficient matrix together with the nonzero entry in the fourth column of that row in matrix (12.2.4) tells us that \( {A}^{-1} \) does not exist.
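A quicker way to see that \( {A}^{-1} \) cannot exist: the second row of \( A \) is the negative of the first, so \( \det A = 0 \). A small check in pure Python (the helper name `det3` is ours):

```python
def det3(m):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[1, 2, 1], [-1, -2, -1], [0, 5, 8]]
print(det3(A))  # 0: the second row is the negative of the first, so A is singular
```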
|
Yes
|
A Vector Space of Matrices. Let \( V = {M}_{2 \times 3}\left( \mathbb{R}\right) \) and let the operations of addition and scalar multiplication be the usual operations of addition and scalar multiplication on matrices. Then \( V \) together with these operations is a real vector space.
|
The reader is strongly encouraged to verify the definition for this example before proceeding further (see Exercise 3 of this section).
|
No
|
The Vector Space \( {\mathbb{R}}^{2} \) . Let \( {\mathbb{R}}^{2} = \left\{ {\left( {{a}_{1},{a}_{2}}\right) \mid {a}_{1},{a}_{2} \in \mathbb{R}}\right\} \) . If we define addition and scalar multiplication in the natural way, that is, as we would on \( 1 \times 2 \) matrices, then \( {\mathbb{R}}^{2} \) is a vector space over \( \mathbb{R} \) .
|
See Exercise 12.3.3.4 of this section.
|
No
|
The vector \( \left( {2,3}\right) \) in \( {\mathbb{R}}^{2} \) is a linear combination of the vectors \( \left( {1,0}\right) \) and \( \left( {0,1}\right) \)
|
since \( \left( {2,3}\right) = 2\left( {1,0}\right) + 3\left( {0,1}\right) \)
|
Yes
|
Prove that the vector \( \left( {4,5}\right) \) is a linear combination of the vectors \( \left( {3,1}\right) \) and \( \left( {1,4}\right) \) .
|
By the definition we must show that there exist scalars \( {a}_{1} \) and \( {a}_{2} \) such that:\n\n\[ \begin{aligned} \left( {4,5}\right) & = {a}_{1}\left( {3,1}\right) + {a}_{2}\left( {1,4}\right) \\ & = \left( {3{a}_{1} + {a}_{2},{a}_{1} + 4{a}_{2}}\right) \end{aligned}\; \Rightarrow \;\begin{array}{l} 3{a}_{1} + {a}_{2} = 4 \\ {a}_{1} + 4{a}_{2} = 5 \end{array} \]\n\nThis system has the solution \( {a}_{1} = 1,{a}_{2} = 1 \) .\n\nHence, if we replace \( {a}_{1} \) and \( {a}_{2} \) both by 1, then the two vectors \( \left( {3,1}\right) \) and \( \left( {1,4}\right) \) produce, or generate, the vector \( \left( {4,5}\right) \) .
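The little \( 2 \times 2 \) system can be solved by Cramer's rule, which also confirms the combination reproduces \( (4,5) \); a quick exact check:

```python
from fractions import Fraction

# Solve 3*a1 + a2 = 4 and a1 + 4*a2 = 5 by Cramer's rule.
det = Fraction(3*4 - 1*1)          # 11
a1 = Fraction(4*4 - 1*5, det)      # (16 - 5)/11 = 1
a2 = Fraction(3*5 - 1*4, det)      # (15 - 4)/11 = 1
assert (a1, a2) == (1, 1)

# Confirm the linear combination a1*(3,1) + a2*(1,4) reproduces (4, 5).
assert (a1*3 + a2*1, a1*1 + a2*4) == (4, 5)
```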
|
Yes
|
Theorem 12.3.15 The fundamental property of a basis. If \( \left\{ {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{n}}\right\} \) is a basis for a vector space \( V \) over \( \mathbb{R} \), then any vector \( y \in V \) can be uniquely expressed as a linear combination of the \( {\mathbf{x}}_{i} \)’s.
|
Proof. Assume that \( \left\{ {{\mathbf{x}}_{1},{\mathbf{x}}_{2},\ldots ,{\mathbf{x}}_{n}}\right\} \) is a basis for \( V \) over \( \mathbb{R} \). We must prove two facts:\n\n(1) each vector \( y \in V \) can be expressed as a linear combination of the \( {\mathbf{x}}_{i} \)’s, and\n\n(2) each such expression is unique.\n\nPart 1 is trivial since a basis, by its definition, must generate all of \( V \).\n\nThe proof of part 2 is a bit more difficult. We follow the standard approach for any uniqueness facts. Let \( y \) be any vector in \( V \) and assume that there are two different ways of expressing \( y \), namely\n\n\[ y = {a}_{1}{\mathbf{x}}_{1} + {a}_{2}{\mathbf{x}}_{2} + \ldots + {a}_{n}{\mathbf{x}}_{n} \]\n\nand\n\n\[ y = {b}_{1}{\mathbf{x}}_{1} + {b}_{2}{\mathbf{x}}_{2} + \ldots + {b}_{n}{\mathbf{x}}_{n} \]\n\nwhere at least one \( {a}_{i} \) is different from the corresponding \( {b}_{i} \). Then equating these two linear combinations we get\n\n\[ {a}_{1}{\mathbf{x}}_{1} + {a}_{2}{\mathbf{x}}_{2} + \ldots + {a}_{n}{\mathbf{x}}_{n} = {b}_{1}{\mathbf{x}}_{1} + {b}_{2}{\mathbf{x}}_{2} + \ldots + {b}_{n}{\mathbf{x}}_{n} \]\n\nso that\n\n\[ \left( {{a}_{1} - {b}_{1}}\right) {\mathbf{x}}_{1} + \left( {{a}_{2} - {b}_{2}}\right) {\mathbf{x}}_{2} + \ldots + \left( {{a}_{n} - {b}_{n}}\right) {\mathbf{x}}_{n} = \mathbf{0} \]\n\nNow a crucial observation: since the \( {\mathbf{x}}_{i}^{\prime }s \) form a linearly independent set, the only solution to the previous equation is that each of the coefficients must equal zero, so \( {a}_{i} - {b}_{i} = 0 \) for \( i = 1,2,\ldots, n \). Hence \( {a}_{i} = {b}_{i} \), for all \( i \). This contradicts our assumption that at least one \( {a}_{i} \) is different from the corresponding \( {b}_{i} \), so each vector \( \mathbf{y} \in V \) can be expressed in one and only one way.
|
Yes
|
Prove that \( \{ \left( {1,1}\right) ,\left( {-1,1}\right) \} \) is a basis for \( {\mathbb{R}}^{2} \) over \( \mathbb{R} \) and explain what this means geometrically.
|
First we show that the vectors \( \left( {1,1}\right) \) and \( \left( {-1,1}\right) \) generate all of \( {\mathbb{R}}^{2} \) . We can do this by imitating Example 12.3.8 and leave it to the reader (see Exercise 12.3.3.10 of this section). Secondly, we must prove that the set is linearly independent.\n\nLet \( {a}_{1} \) and \( {a}_{2} \) be scalars such that \( {a}_{1}\left( {1,1}\right) + {a}_{2}\left( {-1,1}\right) = \left( {0,0}\right) \) . We must prove that the only solution to the equation is that \( {a}_{1} \) and \( {a}_{2} \) must both equal zero. The above equation becomes \( \left( {{a}_{1} - {a}_{2},{a}_{1} + {a}_{2}}\right) = \left( {0,0}\right) \) which gives us the system\n\n\[ \n{a}_{1} - {a}_{2} = 0 \n\]\n\n\[ \n{a}_{1} + {a}_{2} = 0 \n\]\n\nThe augmented matrix of this system reduces in such a way that the only solution is the trivial one of all zeros:\n\n\[ \n\left( \begin{matrix} 1 & - 1 & 0 \\ 1 & 1 & 0 \end{matrix}\right) \rightarrow \left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right) \Rightarrow {a}_{1} = {a}_{2} = 0 \n\]\n\nTherefore, the set is linearly independent.
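Both halves of the argument can be checked at once: the determinant of the matrix whose columns are the two vectors is nonzero (independence), and solving \( {a}_{1}(1,1) + {a}_{2}(-1,1) = (y_1, y_2) \) explicitly shows any vector is generated. A sketch (the helper `coordinates` is our name):

```python
from fractions import Fraction

# Determinant of the matrix with columns (1,1) and (-1,1); nonzero means independence.
det = 1 * 1 - (-1) * 1
assert det == 2

def coordinates(y1, y2):
    """Solve a1*(1,1) + a2*(-1,1) = (y1, y2): a1 - a2 = y1 and a1 + a2 = y2."""
    return (Fraction(y1 + y2, 2), Fraction(y2 - y1, 2))

a1, a2 = coordinates(3, 7)
assert (a1 - a2, a1 + a2) == (3, 7)   # the combination really gives (3, 7)
assert coordinates(0, 0) == (0, 0)    # only the trivial combination gives (0, 0)
```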
|
No
|
We will now diagonalize the matrix \( A \) of Example 12.4.2. We form the matrix \( P \) as follows: Let \( {P}^{\left( 1\right) } \) be the first column of \( P \) . Choose for \( {P}^{\left( 1\right) } \) any eigenvector from \( {E}_{1} \) . We may as well choose a simple vector in \( {E}_{1} \) so \( {P}^{\left( 1\right) } = \left( \begin{matrix} 1 \\ - 1 \end{matrix}\right) \) is our candidate.
|
Similarly, let \( {P}^{\left( 2\right) } \) be the second column of \( P \), and choose for \( {P}^{\left( 2\right) } \) any eigenvector from \( {E}_{2} \) . The vector \( {P}^{\left( 2\right) } = \left( \begin{array}{l} 1 \\ 2 \end{array}\right) \) is a reasonable choice, thus\n\n\[ P = \left( \begin{matrix} 1 & 1 \\ - 1 & 2 \end{matrix}\right) \text{ and }{P}^{-1} = \frac{1}{3}\left( \begin{matrix} 2 & - 1 \\ 1 & 1 \end{matrix}\right) = \left( \begin{matrix} \frac{2}{3} & - \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} \end{matrix}\right) \]\n\nso that\n\n\[ {P}^{-1}{AP} = \frac{1}{3}\left( \begin{matrix} 2 & - 1 \\ 1 & 1 \end{matrix}\right) \left( \begin{array}{ll} 2 & 1 \\ 2 & 3 \end{array}\right) \left( \begin{matrix} 1 & 1 \\ - 1 & 2 \end{matrix}\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 4 \end{array}\right) \]\n\nNotice that the elements on the main diagonal of \( D \) are the eigenvalues of \( A \) , where \( {D}_{ii} \) is the eigenvalue corresponding to the eigenvector \( {P}^{\left( i\right) } \).
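The product \( {P}^{-1}{AP} \) can be verified in exact arithmetic; a short check (the helper `matmul2` is our name):

```python
from fractions import Fraction

def matmul2(X, Y):
    """2x2 matrix product in exact arithmetic."""
    return [[sum(Fraction(X[i][k]) * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [2, 3]]
P = [[1, 1], [-1, 2]]
Pinv = [[Fraction(2, 3), Fraction(-1, 3)], [Fraction(1, 3), Fraction(1, 3)]]

D = matmul2(matmul2(Pinv, A), P)
assert D == [[1, 0], [0, 4]]   # the eigenvalues appear on the diagonal
```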
|
Yes
|
Theorem 12.4.9 A condition for diagonalizability. Let \( A \) be an \( n \times n \) matrix. Then \( A \) is diagonalizable if and only if \( A \) has \( n \) linearly independent eigenvectors.
|
Proof. Outline of a proof: \( \left( \Leftarrow \right) \) Assume that \( A \) has \( n \) linearly independent eigenvectors, \( {P}^{\left( 1\right) },{P}^{\left( 2\right) },\ldots ,{P}^{\left( n\right) } \), with corresponding eigenvalues \( {\lambda }_{1},{\lambda }_{2},\ldots \) , \( {\lambda }_{n} \) . We want to prove that \( A \) is diagonalizable. Column \( i \) of the \( n \times n \) matrix \( {AP} \) is \( A{P}^{\left( i\right) } \) (see Exercise 7 of this section). Then, since \( {P}^{\left( i\right) } \) is an eigenvector of \( A \) associated with the eigenvalue \( {\lambda }_{i} \) we have \( A{P}^{\left( i\right) } = {\lambda }_{i}{P}^{\left( i\right) } \) for \( i = 1,2,\ldots, n \) . But this means that \( {AP} = {PD} \), where \( D \) is the diagonal matrix with diagonal entries \( {\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{n} \) . If we multiply both sides of the equation by \( {P}^{-1} \) we get the desired \( {P}^{-1}{AP} = D \) .\n\n\( \left( \Rightarrow \right) \) The proof in this direction involves a concept that is not covered in this text (rank of a matrix); so we refer the interested reader to virtually any linear algebra text for a proof.
|
No
|
Example 12.4.10 A Matrix that is Not Diagonalizable. Let us attempt to diagonalize the matrix \( A = \left( \begin{matrix} 1 & 0 & 0 \\ 0 & 2 & 1 \\ 1 & - 1 & 4 \end{matrix}\right) \)
|
First, we determine the eigenvalues.\n\n\[ \det \left( {A - {\lambda I}}\right) = \det \left( \begin{matrix} 1 - \lambda & 0 & 0 \\ 0 & 2 - \lambda & 1 \\ 1 & - 1 & 4 - \lambda \end{matrix}\right) \]\n\n\[ = \left( {1 - \lambda }\right) \det \left( \begin{matrix} 2 - \lambda & 1 \\ - 1 & 4 - \lambda \end{matrix}\right) \]\n\n\[ = \left( {1 - \lambda }\right) \left( {\left( {2 - \lambda }\right) \left( {4 - \lambda }\right) + 1}\right) \]\n\n\[ = \left( {1 - \lambda }\right) \left( {{\lambda }^{2} - {6\lambda } + 9}\right) \]\n\n\[ = \left( {1 - \lambda }\right) {\left( \lambda - 3\right) }^{2} \]\n\nTherefore there are two eigenvalues, \( {\lambda }_{1} = 1 \) and \( {\lambda }_{2} = 3 \) . Since \( {\lambda }_{1} \) is an eigenvalue of degree one, it will have an eigenspace of dimension 1 . Since \( {\lambda }_{2} \) is a double root of the characteristic equation, the dimension of its eigenspace must be 2 in order to be able to diagonalize.\n\nCase 1. For \( {\lambda }_{1} = 1 \), the equation \( \left( {A - {\lambda I}}\right) \mathbf{x} = \mathbf{0} \) becomes\n\n\[ \left( \begin{matrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & - 1 & 3 \end{matrix}\right) \left( \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \end{array}\right) = \left( \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right) \]\n\nRow reduction of this system reveals one free variable and eigenspace\n\n\[ \left( \begin{matrix} {x}_{1} \\ {x}_{2} \\ {x}_{3} \end{matrix}\right) = \left( \begin{matrix} - 4{x}_{3} \\ - {x}_{3} \\ {x}_{3} \end{matrix}\right) = {x}_{3}\left( \begin{matrix} - 4 \\ - 1 \\ 1 \end{matrix}\right) \]\n\nHence, \( \left\{ \left( \begin{matrix} - 4 \\ - 1 \\ 1 \end{matrix}\right) \right\} \) is a basis for the eigenspace of \( {\lambda }_{1} = 1 \) .\n\nCase 2. 
For \( {\lambda }_{2} = 3 \), the equation \( \left( {A - {\lambda I}}\right) \mathbf{x} = \mathbf{0} \) becomes\n\n\[ \left( \begin{matrix} - 2 & 0 & 0 \\ 0 & - 1 & 1 \\ 1 & - 1 & 1 \end{matrix}\right) \left( \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \end{array}\right) = \left( \begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right) \]\n\nOnce again there is only one free variable in the row reduction and so the dimension of the eigenspace will be one:\n\n\[ \left( \begin{array}{l} {x}_{1} \\ {x}_{2} \\ {x}_{3} \end{array}\right) = \left( \begin{matrix} 0 \\ {x}_{3} \\ {x}_{3} \end{matrix}\right) = {x}_{3}\left( \begin{array}{l} 0 \\ 1 \\ 1 \end{array}\right) \]\n\nHence, \( \left\{ \left( \begin{array}{l} 0 \\ 1 \\ 1 \end{array}\right) \right\} \) is a basis for the eigenspace of \( {\lambda }_{2} = 3 \) . This means that \( {\lambda }_{2} = \) 3 produces only one column for \( P \) . Since we began with only two eigenvalues, we had hoped that \( {\lambda }_{2} = 3 \) would produce a vector space of dimension two, or, in matrix terms, two linearly independent columns for \( P \) . Since \( A \) does not have three linearly independent eigenvectors \( A \) cannot be diagonalized.
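Both eigenspace computations can be confirmed by multiplying \( A \) against the basis vectors found in each case; a quick check (helper names are ours):

```python
def matvec(M, v):
    """Matrix-vector product for a 3x3 matrix."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

A = [[1, 0, 0], [0, 2, 1], [1, -1, 4]]

v1 = [-4, -1, 1]   # spans the eigenspace for lambda = 1
v2 = [0, 1, 1]     # spans the eigenspace for lambda = 3
assert matvec(A, v1) == [1 * x for x in v1]
assert matvec(A, v2) == [3 * x for x in v2]
# Only two independent eigenvectors exist for this 3x3 matrix,
# so A cannot be diagonalized.
```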
|
Yes
|
Consider the computation of terms of the Fibonacci sequence. Recall that \( {F}_{0} = 1,{F}_{1} = 1 \) and \( {F}_{k} = {F}_{k - 1} + {F}_{k - 2} \) for \( k \geq 2 \).
|
In order to formulate the calculation in matrix form, we introduced the \
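The answer is truncated here, so the matrix itself does not survive in this row. The standard companion-matrix formulation is sketched below as an assumption: the matrix `A` and the helper names are ours, chosen to match the text's convention \( {F}_{0} = {F}_{1} = 1 \).

```python
def matmul2(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 1]]   # assumed companion matrix: (F_{k-1}, F_k) = (F_{k-2}, F_{k-1}) * A

def fib(k):
    """F_k with F_0 = F_1 = 1, computed as the second entry of (F_0, F_1) * A^(k-1)."""
    if k == 0:
        return 1
    M = [[1, 0], [0, 1]]           # identity
    for _ in range(k - 1):
        M = matmul2(M, A)
    return M[0][1] + M[1][1]       # (1, 1) * A^(k-1), second component

assert [fib(k) for k in range(7)] == [1, 1, 2, 3, 5, 8, 13]
```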
|
No
|
How do we compute \( {A}^{k} \) for possibly large values of \( k \) ?
|
From the discussion at the beginning of this section, we know that \( {A}^{k} = P{D}^{k}{P}^{-1} \) if \( A \) is diagonalizable. We leave to the reader to show that \( \lambda = 1,2 \), and -1 are eigenvalues of \( A \) with eigenvectors\n\n\[ \left( \begin{matrix} 1 \\ 0 \\ - 1 \end{matrix}\right) ,\left( \begin{array}{l} 1 \\ 1 \\ 1 \end{array}\right) ,\text{ and }\left( \begin{matrix} 1 \\ - 2 \\ 1 \end{matrix}\right) \]\n\nThen\n\n\[ {A}^{k} = P\left( \begin{matrix} 1 & 0 & 0 \\ 0 & {2}^{k} & 0 \\ 0 & 0 & {\left( -1\right) }^{k} \end{matrix}\right) {P}^{-1} \]\n\n\[ \text{where}P = \left( \begin{matrix} 1 & 1 & 1 \\ 0 & 1 & - 2 \\ - 1 & 1 & 1 \end{matrix}\right) \text{and}{P}^{-1} = \left( \begin{matrix} \frac{1}{2} & 0 & - \frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{6} & - \frac{1}{3} & \frac{1}{6} \end{matrix}\right) \]
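Although \( A \) itself is not displayed in this answer, it can be recovered from the decomposition as \( A = PD{P}^{-1} \), and the formula for \( {A}^{k} \) checked against plain repeated multiplication; a sketch in exact arithmetic (helper names are ours):

```python
from fractions import Fraction

def matmul3(X, Y):
    """3x3 matrix product in exact arithmetic."""
    return [[sum(Fraction(X[i][k]) * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

P = [[1, 1, 1], [0, 1, -2], [-1, 1, 1]]
Pinv = [[Fraction(1, 2), 0, Fraction(-1, 2)],
        [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)],
        [Fraction(1, 6), Fraction(-1, 3), Fraction(1, 6)]]

def A_power(k):
    """A^k = P * diag(1, 2^k, (-1)^k) * P^{-1}."""
    Dk = [[1, 0, 0], [0, 2**k, 0], [0, 0, (-1)**k]]
    return matmul3(matmul3(P, Dk), Pinv)

A = A_power(1)          # recovers A itself
A5 = A
for _ in range(4):      # compute A^5 the slow way
    A5 = matmul3(A5, A)
assert A5 == A_power(5)
```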
|
No
|
Given a polynomial \( f\left( x\right) \), we defined the matrix-polynomial \( f\left( A\right) \) for square matrices in Chapter 5. Hence, we are in a position to describe \( {e}^{A} \) for an \( n \times n \) matrix \( A \) as a limit of polynomials, the partial sums of the series. Formally, we write\n\n\[ \n{e}^{A} = I + A + \frac{{A}^{2}}{2!} + \frac{{A}^{3}}{3!} + \cdots = \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{A}^{k}}{k!} \n\]
|
Again we encounter the need to compute high powers of a matrix. Let \( A \) be an \( n \times n \) diagonalizable matrix. Then there exists an invertible \( n \times n \) matrix \( P \) such that \( {P}^{-1}{AP} = D \), a diagonal matrix, so that\n\n\[ \n{e}^{A} = {e}^{{PD}{P}^{-1}} \n\]\n\n\[ \n= \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{\left( PD{P}^{-1}\right) }^{k}}{k!} \n\]\n\n\[ \n= P\left( {\mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{D}^{k}}{k!}}\right) {P}^{-1} \n\]\n\nThe infinite sum in the middle of this final expression can be easily evaluated if \( D \) is diagonal. All entries of powers off the diagonal are zero and the \( {i}^{\text{th }} \) entry of the diagonal is\n\n\[ \n{\left( \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{D}^{k}}{k!}\right) }_{ii} = \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{D}_{ii}{}^{k}}{k!} = {e}^{{D}_{ii}} \n\]\n\nFor example, if \( A = \left( \begin{array}{ll} 2 & 1 \\ 2 & 3 \end{array}\right) \), the first matrix we diagonalized in Section\n\n12.3, we found that \( P = \left( \begin{matrix} 1 & 1 \\ - 1 & 2 \end{matrix}\right) \) and \( D = \left( \begin{array}{ll} 1 & 0 \\ 0 & 4 \end{array}\right) \). \n\nTherefore,\n\n\[ \n{e}^{A} = \left( \begin{matrix} 1 & 1 \\ - 1 & 2 \end{matrix}\right) \left( \begin{matrix} e & 0 \\ 0 & {e}^{4} \end{matrix}\right) \left( \begin{matrix} \frac{2}{3} & - \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} \end{matrix}\right) \n\]\n\n\[ \n= \left( \begin{matrix} \frac{2e}{3} + \frac{{e}^{4}}{3} & - \frac{e}{3} + \frac{{e}^{4}}{3} \\ - \frac{2e}{3} + \frac{2{e}^{4}}{3} & \frac{e}{3} + \frac{2{e}^{4}}{3} \end{matrix}\right) \n\]\n\n\[ \n\approx \left( \begin{array}{ll} {20.0116} & {17.2933} \\ {34.5866} & {37.3049} \end{array}\right) \n\]
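The numerical values at the end can be reproduced directly from \( {e}^{A} = P{e}^{D}{P}^{-1} \); a short check (the helper `matmul2` is our name):

```python
from math import exp, isclose

def matmul2(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1.0, 1.0], [-1.0, 2.0]]
Pinv = [[2/3, -1/3], [1/3, 1/3]]
eD = [[exp(1), 0.0], [0.0, exp(4)]]   # exponentials of the diagonal entries of D

eA = matmul2(matmul2(P, eD), Pinv)
expected = [[20.0116, 17.2933], [34.5866, 37.3049]]   # values printed in the text
assert all(isclose(eA[i][j], expected[i][j], abs_tol=5e-4)
           for i in range(2) for j in range(2))
```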
|
Yes
|
Theorem 13.1.6 Uniqueness of Least Upper and Greatest Lower Bounds. Let \( \left( {L, \preccurlyeq }\right) \) be a poset, and \( a, b \in L \) . If a greatest lower bound of a and \( b \) exists, then it is unique. The same is true of a least upper bound, if it exists.
|
Proof. Let \( \ell \) and \( {\ell }^{\prime } \) be greatest lower bounds of \( a \) and \( b \) . We will prove that \( \ell = {\ell }^{\prime } \) .\n\n(1) \( \ell \) a greatest lower bound of \( a \) and \( b \Rightarrow \ell \) is a lower bound of \( a \) and \( b \) .\n\n(2) \( {\ell }^{\prime } \) a greatest lower bound of \( a \) and \( b \) and \( \ell \) a lower bound of \( a \) and \( b \) \( \Rightarrow \ell \preccurlyeq {\ell }^{\prime } \), by the definition of greatest lower bound.\n\n(3) \( {\ell }^{\prime } \) a greatest lower bound of \( a \) and \( b \Rightarrow {\ell }^{\prime } \) is a lower bound of \( a \) and \( b \) .\n\n(4) \( \ell \) a greatest lower bound of \( a \) and \( b \) and \( {\ell }^{\prime } \) a lower bound of \( a \) and \( b \) . \( \Rightarrow {\ell }^{\prime } \preccurlyeq \ell \) by the definition of greatest lower bound.\n\n(5) \( \ell \preccurlyeq {\ell }^{\prime } \) and \( {\ell }^{\prime } \preccurlyeq \ell \Rightarrow \ell = {\ell }^{\prime } \) by the antisymmetry property of a partial ordering.
|
Yes
|
The power set of a three element set. Consider the poset \( \left( {\mathcal{P}\left( A\right) , \subseteq }\right) \), where \( A = \{ 1,2,3\} \) . The greatest lower bound of \( \{ 1,2\} \) and \( \{ 1,3\} \) is \( \ell = \{ 1\} \) . For any other element \( {\ell }^{\prime } \) which is a subset of \( \{ 1,2\} \) and \( \{ 1,3\} \) (there is only one; what is it?), \( {\ell }^{\prime } \subseteq \ell \) .
|
The least element of \( \mathcal{P}\left( A\right) \) is \( \varnothing \) and the greatest element is \( A = \{ 1,2,3\} \) . The Hasse diagram of this poset is shown in Figure 13.1.11.
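The greatest lower bound in this poset can be computed by brute force over all subsets; a small sketch (the helper `glb` is our name):

```python
from itertools import combinations

A = {1, 2, 3}
subsets = [frozenset(c) for r in range(len(A) + 1) for c in combinations(sorted(A), r)]

def glb(X, Y):
    """Greatest lower bound of X and Y in (P(A), subset-of)."""
    lbs = [S for S in subsets if S <= X and S <= Y]   # all common lower bounds
    top = max(lbs, key=len)
    assert all(S <= top for S in lbs)   # every lower bound sits below it
    return top

assert glb(frozenset({1, 2}), frozenset({1, 3})) == frozenset({1})
```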
|
No
|
The power set of a three element set. Consider the poset \( \left( {\mathcal{P}\left( A\right) , \subseteq }\right) \) we examined in Example 13.1.10. It isn’t too surprising that every pair of sets has a greatest lower bound and a least upper bound. Thus, we have a lattice in this case; and \( A \vee B = A \cup B \) and \( A \land B = A \cap B \) .
|
The reader is encouraged to write out the operation tables for \( \left\lbrack {\mathcal{P}\left( A\right) ;\cup , \cap }\right\rbrack \) .
|
No
|
Example 13.2.5 A Nondistributive Lattice. We now give an example of a lattice where the distributive laws do not hold. Let \( L = \{ \mathbf{0}, a, b, c,\mathbf{1}\} \) . We define the partial ordering \( \preccurlyeq \) on \( L \) by the set\n\n\[ \n\{ \left( {\mathbf{0},\mathbf{0}}\right) ,\left( {\mathbf{0}, a}\right) ,\left( {\mathbf{0}, b}\right) ,\left( {\mathbf{0}, c}\right) ,\left( {\mathbf{0},\mathbf{1}}\right) ,\left( {a, a}\right) ,\left( {a,\mathbf{1}}\right) ,\left( {b, b}\right) ,\left( {b,\mathbf{1}}\right) ,\left( {c, c}\right) ,\left( {c,\mathbf{1}}\right) ,\left( {\mathbf{1},\mathbf{1}}\right) \} \n\] \n\nThe operation tables for \( \vee \) and \( \land \) on \( L \) follow directly from this ordering: the join of any two distinct elements of \( \{ a, b, c\} \) is \( \mathbf{1} \), their meet is \( \mathbf{0} \), and \( \mathbf{0} \) and \( \mathbf{1} \) act as identities for \( \vee \) and \( \land \), respectively.\n\nSince every pair of elements in \( L \) has both a join and a meet, \( \left\lbrack {L;\vee , \land }\right\rbrack \) is a lattice. Is this lattice distributive?
|
We note that: \( a \vee \left( {c \land b}\right) = \) \( a \vee \mathbf{0} = a \) and \( \left( {a \vee c}\right) \land \left( {a \vee b}\right) = \mathbf{1} \land \mathbf{1} = \mathbf{1} \) . Therefore, \( a \vee \left( {b \land c}\right) \neq \left( {a \vee b}\right) \land \left( {a \vee c}\right) \) for some values of \( a, b, c \in L \) . Thus, this lattice is not distributive.
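The failing instance can be checked by encoding the join and meet rules of this five-element lattice directly; a small sketch (the function names are ours):

```python
ZERO, ONE = '0', '1'

def join(x, y):
    if x == y: return x
    if x == ZERO: return y
    if y == ZERO: return x
    return ONE            # any two distinct elements of {a, b, c} join to 1

def meet(x, y):
    if x == y: return x
    if x == ONE: return y
    if y == ONE: return x
    return ZERO           # any two distinct elements of {a, b, c} meet at 0

lhs = join('a', meet('b', 'c'))               # a v (b ^ c) = a v 0 = a
rhs = meet(join('a', 'b'), join('a', 'c'))    # (a v b) ^ (a v c) = 1 ^ 1 = 1
assert (lhs, rhs) == ('a', ONE) and lhs != rhs   # distributivity fails
```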
|
Yes
|
Set Complement is a Complement. In Chapter 1, we defined the complement of a subset of any universe. This turns out to be a concrete example of the general concept we have just defined, but we will reason through why this is the case here. Let \( L = \mathcal{P}\left( A\right) \), where \( A = \{ a, b, c\} \). Then \( \left\lbrack {L;\cup , \cap }\right\rbrack \) is a bounded lattice with \( 0 = \varnothing \) and \( 1 = A \). To find the complement, if it exists, of \( B = \{ a, b\} \in L \), for example, we want \( D \) such that\n\n\[ \{ a, b\} \cap D = \varnothing \]\n\n\[ \text{and} \]\n\n\[ \{ a, b\} \cup D = A \]
|
It’s not too difficult to see that \( D = \{ c\} \), since we need to include \( c \) to make the first condition true and can’t include \( a \) or \( b \) if the second condition is to be true. Of course this is precisely how we defined \( {A}^{c} \) in Chapter 1. Since it can be shown that each element of \( L \) has a complement (see Exercise 1), \( \left\lbrack {L;\cup , \cap }\right\rbrack \) is a complemented lattice. Note that if \( A \) is any set and \( L = \mathcal{P}\left( A\right) \), then \( \left\lbrack {L;\cup , \cap }\right\rbrack \) is a complemented lattice where the complement of \( B \in L \) is \( {B}^{c} = A - B \) .
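That \( D = \{ c\} \) is the only candidate can be confirmed by searching all subsets for one satisfying both conditions; a quick sketch:

```python
from itertools import combinations

A = frozenset('abc')
B = frozenset('ab')
subsets = [frozenset(c) for r in range(len(A) + 1) for c in combinations(sorted(A), r)]

# A complement of B must meet B in the least element (empty set)
# and join with B to the greatest element (A).
complements = [D for D in subsets if (B & D) == frozenset() and (B | D) == A]
assert complements == [frozenset('c')]
```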
|
No
|
Theorem 13.3.7 One condition for unique complements. If \( \left\lbrack {L;\vee , \land }\right\rbrack \) is a complemented, distributive lattice, then the complement of each element \( a \in L \) is unique.
|
Proof. Let \( a \in L \) and assume to the contrary that \( a \) has two complements, namely \( {a}_{1} \) and \( {a}_{2} \) . Then by the definition of complement,\n\n\[ \begin{matrix} a \land {a}_{1} = 0\text{ and }a \vee {a}_{1} = 1, \\ \text{ and } \end{matrix} \]\n\n\[ a \land {a}_{2} = 0\text{and}a \vee {a}_{2} = 1\text{,} \]\n\nThen\n\n\[ {a}_{1} = {a}_{1} \land \mathbf{1} = {a}_{1} \land \left( {a \vee {a}_{2}}\right) \]\n\n\[ = \left( {{a}_{1} \land a}\right) \vee \left( {{a}_{1} \land {a}_{2}}\right) \]\n\n\[ = \mathbf{0} \vee \left( {{a}_{1} \land {a}_{2}}\right) \]\n\n\[ = {a}_{1} \land {a}_{2} \]\n\nOn the other hand,\n\n\[ {a}_{2} = {a}_{2} \land \mathbf{1} = {a}_{2} \land \left( {a \vee {a}_{1}}\right) \]\n\n\[ = \left( {{a}_{2} \land a}\right) \vee \left( {{a}_{2} \land {a}_{1}}\right) \]\n\n\[ = \mathbf{0} \vee \left( {{a}_{2} \land {a}_{1}}\right) \]\n\n\[ = {a}_{2} \land {a}_{1} \]\n\n\[ = {a}_{1} \land {a}_{2} \]\n\nHence \( {a}_{1} = {a}_{2} \), which contradicts the assumption that \( a \) has two different complements.
|
Yes
|
Theorem 13.4.6 Let \( \mathcal{B} = \left\lbrack {B;\vee ,\land , - }\right\rbrack \) be any finite Boolean algebra, and let \( A \) be the set of all atoms of \( \mathcal{B} \) . Then \( \left\lbrack {\mathcal{P}\left( A\right) ;\cup ,\cap ,{}^{c}}\right\rbrack \) is isomorphic to \( \left\lbrack {B;\vee ,\land , - }\right\rbrack \)
|
Proof. An isomorphism that serves to prove this theorem is \( T : \mathcal{P}\left( A\right) \rightarrow B \) defined by \( T\left( S\right) = \mathop{\bigvee }\limits_{{a \in S}}a \), where \( T\left( \varnothing \right) \) is interpreted as the zero of \( \mathcal{B} \) . We leave it to the reader to prove that this is indeed an isomorphism.
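The text leaves the verification to the reader; as a concrete illustration (our example, not the text's), take \( B \) to be the divisors of 30 with join \( = \operatorname{lcm} \) and meet \( = \gcd \), whose atoms are 2, 3, and 5, and check that \( T\left( S\right) = \mathop{\bigvee }\limits_{{a \in S}}a \) is a bijection preserving both operations:

```python
from math import gcd
from itertools import combinations

def lcm(x, y):
    return x * y // gcd(x, y)

# A concrete finite Boolean algebra: divisors of 30, join = lcm, meet = gcd.
divisors = [d for d in range(1, 31) if 30 % d == 0]
atoms = [2, 3, 5]   # the elements covering the zero element, 1

def T(S):
    """T(S) = join of the atoms in S; T(empty set) is the zero of the algebra."""
    v = 1
    for a in S:
        v = lcm(v, a)
    return v

subsets = [frozenset(c) for r in range(4) for c in combinations(atoms, r)]
assert sorted(T(S) for S in subsets) == divisors          # T is a bijection onto B
assert all(T(S1 | S2) == lcm(T(S1), T(S2)) and
           T(S1 & S2) == gcd(T(S1), T(S2))
           for S1 in subsets for S2 in subsets)           # T preserves join and meet
```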
|
No
|