| Q | A | Result |
|---|---|---|
Example 2.7.7. Since, for all \( x > 0 \)
|
\[ \frac{d}{dx}\log \left( x\right) = \frac{1}{x} > 0 \] and \[ \frac{{d}^{2}}{d{x}^{2}}\log \left( x\right) = - \frac{1}{{x}^{2}} < 0 \] the function \( y = \log \left( x\right) \) is increasing on \( \left( {0,\infty }\right) \) and its graph is concave downward on \( \left( {0,\infty }\right) \) . See Figure 2.7.2.
|
Yes
|
Example 2.7.9. We may now complete the example, discussed in Example 2.5.8 and continued in Example 2.6.19, of finding the length \( L \) of the graph of \( y = {x}^{2} \) over the interval \( \left\lbrack {0,1}\right\rbrack \) .
|
In those examples we found that

\[ L = {\int }_{0}^{1}\sqrt{1 + 4{x}^{2}}{dx} = \frac{\sqrt{5}}{2} + \frac{1}{4}{\int }_{1}^{2 + \sqrt{5}}\frac{1}{w}{dw}. \]

Now we see that

\[ {\int }_{1}^{2 + \sqrt{5}}\frac{1}{w}{dw} = {\left. \log \left( w\right) \right| }_{1}^{2 + \sqrt{5}} = \log \left( {2 + \sqrt{5}}\right) \]

and so

\[ L = \frac{\sqrt{5}}{2} + \frac{1}{4}\log \left( {2 + \sqrt{5}}\right) \]
|
Yes
|
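As a numerical sanity check (Python, not part of the original text; variable names are ours), we can approximate the arc length of \( y = {x}^{2} \) over \( \left\lbrack {0,1}\right\rbrack \) by summing the lengths of many small chords and compare the result with the closed form above:

```python
import math

# Closed form from the example: L = sqrt(5)/2 + (1/4) * log(2 + sqrt(5))
closed_form = math.sqrt(5) / 2 + 0.25 * math.log(2 + math.sqrt(5))

# Independent check: approximate the arc length of y = x^2 on [0, 1]
# by summing the lengths of n small chords.
n = 100_000
length = sum(
    math.hypot(1 / n, ((i + 1) / n) ** 2 - (i / n) ** 2)
    for i in range(n)
)

# The polyline approximation agrees with the closed form
assert abs(length - closed_form) < 1e-6
```

Both values come out to approximately 1.4789.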
To evaluate

\[ {\int }_{0}^{1}\frac{x}{1 + {x}^{2}}{dx} \]
|
we first make the change of variable

\[ u = 1 + {x}^{2} \]

\[ {du} = {2x}\,{dx}. \]

Then

\[ {\int }_{0}^{1}\frac{x}{1 + {x}^{2}}{dx} = \frac{1}{2}{\int }_{1}^{2}\frac{1}{u}{du} = {\left. \frac{1}{2}\log \left( u\right) \right| }_{1}^{2} = \frac{\log \left( 2\right) }{2}. \]
|
Yes
|
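A quick midpoint-rule check in Python (our addition, not part of the text) confirms the substitution result:

```python
import math

# Midpoint-rule approximation of the integral of x/(1 + x^2) over [0, 1]
n, total = 100_000, 0.0
for i in range(n):
    x = (i + 0.5) / n          # midpoint of the i-th subinterval
    total += x / (1 + x * x)
approx = total / n             # multiply by the subinterval width 1/n

# Matches the closed form log(2)/2 from the change of variable
assert math.isclose(approx, math.log(2) / 2, rel_tol=1e-8)
```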
Evaluating

\[ {\int }_{1}^{10}\log \left( x\right) {dx} \]
|
provides an interesting application of integration by parts. If we let

\[ u = \log \left( x\right) \]

\[ {dv} = {dx} \]

\[ {du} = \frac{1}{x}{dx} \]

\[ v = x, \]

then

\[ {\int }_{1}^{10}\log \left( x\right) {dx} = {\left. x\log \left( x\right) \right| }_{1}^{10} - {\int }_{1}^{10}{dx} = {10}\log \left( {10}\right) - 9. \]
|
Yes
|
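The integration-by-parts result can likewise be verified numerically (a Python sketch of ours, not from the text):

```python
import math

# Midpoint-rule approximation of the integral of log(x) over [1, 10]
n, total = 100_000, 0.0
width = 9 / n                  # the interval [1, 10] has length 9
for i in range(n):
    total += math.log(1 + (i + 0.5) * width)
approx = total * width

# Matches the integration-by-parts answer 10*log(10) - 9
assert math.isclose(approx, 10 * math.log(10) - 9, rel_tol=1e-8)
```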
Example 2.7.12. We could use the change of variable \( u = {3x} + 2 \) to evaluate

\[ {\int }_{0}^{5}\frac{4}{{3x} + 2}{dx} \]
|
\[ {\int }_{0}^{5}\frac{4}{{3x} + 2}{dx} = {\left. \frac{4}{3}\log \left( 3x + 2\right) \right| }_{0}^{5} = \frac{4}{3}\left( {\log \left( {17}\right) - \log \left( 2\right) }\right) = \frac{4}{3}\log \left( \frac{17}{2}\right) . \]
|
Yes
|
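Again, a short numerical check (Python, added by us) agrees with the antiderivative \( \frac{4}{3}\log \left( {3x + 2}\right) \):

```python
import math

# Midpoint-rule approximation of the integral of 4/(3x + 2) over [0, 5]
n, total = 100_000, 0.0
width = 5 / n
for i in range(n):
    total += 4 / (3 * (i + 0.5) * width + 2)
approx = total * width

# Matches (4/3) * log(17/2)
assert math.isclose(approx, (4 / 3) * math.log(17 / 2), rel_tol=1e-8)
```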
Example 2.7.14. Carbon-14 is a naturally occurring radioactive isotope of carbon with a half-life of 5730 years. A living organism will maintain constant levels of carbon-14, which will begin to decay once the organism dies and is buried. Because of this, the amount of carbon-14 in the remains of an organism may be used to estimate its age. For example, suppose a piece of wood found buried at an archaeological site has \( {14}\% \) of its original carbon-14. If \( T \) is the number of years since the wood was buried and \( {y}_{0} \) is the original amount of carbon-14 in the wood, then
|
\[ {0.14}{y}_{0} = {y}_{0}{e}^{kT} \]

where, from (2.7.51),

\[ k = - \frac{\log \left( 2\right) }{5730}. \]

It follows that \( {0.14} = {e}^{kT} \), and so

\[ {kT} = \log \left( {0.14}\right) . \]

Thus

\[ T = \frac{\log \left( {0.14}\right) }{k} = - \frac{{5730}\log \left( {0.14}\right) }{\log \left( 2\right) } \approx {16,253}\text{ years.} \]
|
Yes
|
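The arithmetic here is easy to reproduce. The following Python snippet (not part of the original text; variable names are ours) recomputes \( T \):

```python
import math

half_life = 5730
k = -math.log(2) / half_life   # decay constant, from (2.7.51)
T = math.log(0.14) / k         # solve 0.14 = e^{kT} for T

# T is approximately 16,253 years
assert abs(T - 16253) < 1
```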
Example 2.7.15. Suppose that a certain habitat initially holds a population of 1000 deer, and that five years later the population has grown to 1200 deer. If we let \( y \) be the size of the population after \( t \) years, and assuming no constraints on the growth of the population, we would have

\[ y = {1000}{e}^{kt}, \]

where

\[ k = \frac{1}{5}\log \left( \frac{1200}{1000}\right) = \frac{\log \left( {1.2}\right) }{5}. \]

Then, for example, this model would predict a population of

\[ y\left( {10}\right) = {1000}{e}^{10k} \]

\[ = {1000}{e}^{2\log \left( {1.2}\right) } \]

\[ = {1000}{e}^{\log \left( {1.2}^{2}\right) } \]

\[ = \left( {1000}\right) {\left( {1.2}\right) }^{2} \]

\[ = {1440}\text{ deer} \]
|
after five more years.
|
Yes
|
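The exponential model is a one-liner to check in Python (our addition, not from the text):

```python
import math

k = math.log(1.2) / 5          # growth constant from the example
y10 = 1000 * math.exp(10 * k)  # predicted population after 10 years

# e^{10k} = e^{2 log(1.2)} = 1.2^2 = 1.44, so y(10) = 1440
assert round(y10) == 1440
```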
Example 2.7.16. In our previous example, where \( y \) represented the number of deer in a certain habitat after \( t \) years, we found

\[ k = \frac{\log \left( {1.2}\right) }{5}. \]

Now suppose the habitat can support no more than 20,000 deer. Then the logistic model would give us

\[ z = \frac{\left( {1000}\right) \left( {20000}\right) }{{1000} + \left( {{20000} - {1000}}\right) {e}^{-{kt}}} = \frac{20000}{1 + {19}{e}^{-{kt}}} \]

for the number of deer after \( t \) years.
|
After ten years, this model would predict a population of

\[ z\left( {10}\right) = \frac{20000}{1 + {19}{e}^{-{10k}}} \approx {1409}\text{ deer.} \]
|
Yes
|
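The logistic prediction can also be checked directly (Python sketch of ours; the function name `z` mirrors the text's notation):

```python
import math

k = math.log(1.2) / 5

def z(t):
    # Logistic model: initial population 1000, carrying capacity 20000
    return 20000 / (1 + 19 * math.exp(-k * t))

# Slightly below the unconstrained prediction of 1440 deer
assert round(z(10)) == 1409
```

Note that the logistic prediction (1409) is slightly below the unconstrained exponential prediction (1440), as the carrying capacity begins to bite.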
To evaluate

\[ {\int }_{0}^{1}\frac{x}{{x}^{2} - 4}{dx} \]

we first note that, since \( {x}^{2} - 4 = \left( {x - 2}\right) \left( {x + 2}\right) \), there exist constants \( A \) and \( B \) for which

\[ \frac{x}{{x}^{2} - 4} = \frac{A}{x - 2} + \frac{B}{x + 2}. \]
|
It follows that

\[ \frac{x}{{x}^{2} - 4} = \frac{A\left( {x + 2}\right) + B\left( {x - 2}\right) }{{x}^{2} - 4}, \]

and so

\[ x = A\left( {x + 2}\right) + B\left( {x - 2}\right) \]

for all values of \( x \). In particular, when \( x = 2 \) we have \( 2 = {4A} \), and when \( x = - 2 \) we have \( - 2 = - {4B} \). Hence \( A = \frac{1}{2} \) and \( B = \frac{1}{2} \), and so

\[ \frac{x}{{x}^{2} - 4} = \frac{1}{2}\frac{1}{x - 2} + \frac{1}{2}\frac{1}{x + 2}. \]

And so we have

\[ {\int }_{0}^{1}\frac{x}{{x}^{2} - 4}{dx} = \frac{1}{2}{\int }_{0}^{1}\frac{1}{x - 2}{dx} + \frac{1}{2}{\int }_{0}^{1}\frac{1}{x + 2}{dx}. \]

Now

\[ \frac{1}{2}{\int }_{0}^{1}\frac{1}{x + 2}{dx} = {\left. \frac{1}{2}\log \left( x + 2\right) \right| }_{0}^{1} = \frac{1}{2}\left( {\log \left( 3\right) - \log \left( 2\right) }\right) , \]

but the first integral requires a bit more care because \( x - 2 < 0 \) for \( 0 \leq x \leq 1 \). If we make the change of variables

\[ u = - \left( {x - 2}\right) \]

\[ {du} = - {dx}, \]

then, since \( x - 2 = - u \),

\[ \frac{1}{2}{\int }_{0}^{1}\frac{1}{x - 2}{dx} = \frac{1}{2}{\int }_{2}^{1}\frac{1}{u}{du} = - \frac{1}{2}{\int }_{1}^{2}\frac{1}{u}{du} = - {\left. \frac{1}{2}\log \left( u\right) \right| }_{1}^{2} = - \frac{1}{2}\log \left( 2\right) . \]

Hence

\[ {\int }_{0}^{1}\frac{x}{{x}^{2} - 4}{dx} = \frac{1}{2}\left( {\log \left( 3\right) - \log \left( 2\right) }\right) - \frac{1}{2}\log \left( 2\right) = \frac{1}{2}\log \left( 3\right) - \log \left( 2\right) . \]
|
Yes
|
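A numerical check (Python, added by us) of the partial-fractions answer — note the value is negative, since the integrand is negative on \( \left\lbrack {0,1}\right\rbrack \):

```python
import math

# Midpoint-rule approximation of the integral of x/(x^2 - 4) over [0, 1]
n, total = 100_000, 0.0
for i in range(n):
    x = (i + 0.5) / n
    total += x / (x * x - 4)
approx = total / n

# Matches the partial-fractions result (1/2) log 3 - log 2 < 0
exact = 0.5 * math.log(3) - math.log(2)
assert math.isclose(approx, exact, rel_tol=1e-7)
```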
A number is called perfect iff it is equal to the sum of its proper divisors (i.e., numbers that evenly divide it but aren't identical to the number). For instance, 6 is perfect because its proper divisors are 1, 2, and 3, and \( 6 = 1 + 2 + 3 \). In fact, 6 is the only positive integer less than 10 that is perfect. So, using extensionality, we can say:
|
\[ \{ 6\} = \{ x : x\text{ is perfect and }0 \leq x \leq {10}\} \] We read the notation on the right as "the set of all \( x \) such that \( x \) is perfect and \( 0 \leq x \leq {10} \)."
|
Yes
|
Every set is a subset of itself, and \( \varnothing \) is a subset of every set.
|
The set of even numbers is a subset of the set of natural numbers. Also, \( \{ a, b\} \subseteq \) \( \{ a, b, c\} \) . But \( \{ a, b, e\} \) is not a subset of \( \{ a, b, c\} \) .
|
No
|
Extensionality gives a criterion of identity for sets: \( A = B \) iff every element of \( A \) is also an element of \( B \) and vice versa.
|
The definition of "subset" allows us to restate this as: \( A = B \) iff \( A \subseteq B \) and \( B \subseteq A \).
|
No
|
What are all the possible subsets of \( \{ a, b, c\} \) ?
|
They are: \( \varnothing \), \( \{ a\} ,\{ b\} ,\{ c\} ,\{ a, b\} ,\{ a, c\} ,\{ b, c\} ,\{ a, b, c\} \). The set of all these subsets is \( \wp \left( {\{ a, b, c\} }\right) \):

\[ \wp \left( {\{ a, b, c\} }\right) = \{ \varnothing ,\{ a\} ,\{ b\} ,\{ c\} ,\{ a, b\} ,\{ a, c\} ,\{ b, c\} ,\{ a, b, c\} \} \]
|
Yes
|
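The eight subsets can be generated mechanically. A small Python sketch (our addition; the helper name `powerset` is ours):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets, smallest first."""
    items = list(s)
    return [frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)]

subsets = powerset({'a', 'b', 'c'})
assert len(subsets) == 8                      # 2^3 subsets in total
assert frozenset() in subsets                 # the empty set
assert frozenset({'a', 'b', 'c'}) in subsets  # the set itself
```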
Example 1.16. Since the multiplicity of elements doesn't matter, the union of two sets which have an element in common contains that element only once, e.g., \( \{ a, b, c\} \cup \{ a,0,1\} = \{ a, b, c,0,1\} \) .
|
The union of a set and one of its subsets is just the bigger set: \( \{ a, b, c\} \cup \{ a\} = \{ a, b, c\} \).

The union of a set with the empty set is identical to the set: \( \{ a, b, c\} \cup \varnothing = \{ a, b, c\} \).
|
No
|
Example 1.25. If \( A = \{ 0,1\} \), and \( B = \{ 1, a, b\} \), then their product is
|
\[ A \times B = \{ \langle 0,1\rangle ,\langle 0, a\rangle ,\langle 0, b\rangle ,\langle 1,1\rangle ,\langle 1, a\rangle ,\langle 1, b\rangle \} . \]
|
Yes
|
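The product is easy to enumerate in Python (our addition, not from the text); `itertools.product` lists the pairs in exactly this order:

```python
from itertools import product

A = [0, 1]
B = [1, 'a', 'b']
pairs = list(product(A, B))  # all pairs <x, y> with x in A, y in B

assert len(pairs) == len(A) * len(B)  # 2 * 3 = 6 pairs
assert pairs[0] == (0, 1)
assert (1, 'b') in pairs
```

The count `len(A) * len(B)` anticipates the cardinality fact proved for products below.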
If \( A \) is a set, the product of \( A \) with itself, \( A \times A \), is also written \( {A}^{2} \). It is the set of all pairs \( \langle x, y\rangle \) with \( x, y \in A \). The set of all triples \( \langle x, y, z\rangle \) is \( {A}^{3} \), and so on. We can give a recursive definition:
|
\[ {A}^{1} = A \] \[ {A}^{k + 1} = {A}^{k} \times A \]
|
Yes
|
Proposition 1.27. If \( A \) has \( n \) elements and \( B \) has \( m \) elements, then \( A \times B \) has \( n \cdot m \) elements.
|
Proof. For every element \( x \) in \( A \), there are \( m \) elements of the form \( \langle x, y\rangle \in \) \( A \times B \) . Let \( {B}_{x} = \{ \langle x, y\rangle : y \in B\} \) . Since whenever \( {x}_{1} \neq {x}_{2},\left\langle {{x}_{1}, y}\right\rangle \neq \left\langle {{x}_{2}, y}\right\rangle \) , \( {B}_{{x}_{1}} \cap {B}_{{x}_{2}} = \varnothing \) . But if \( A = \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \), then \( A \times B = {B}_{{x}_{1}} \cup \cdots \cup {B}_{{x}_{n}} \), and so has \( n \cdot m \) elements.
|
Yes
|
Theorem 1.29 (Russell’s Paradox). There is no set \( R = \{ x : x \notin x\} \).
|
Proof. For reductio, suppose that \( R = \{ x : x \notin x\} \) exists. Then \( R \in R \) iff \( R \notin R \), by the definition of \( R \). But this is a contradiction.
|
Yes
|
The set \( {\mathbb{N}}^{2} \) of pairs of natural numbers can be listed in a 2-dimensional matrix like this:
|
\[ \langle \mathbf{0},\mathbf{0}\rangle \;\langle 0,1\rangle \;\langle 0,2\rangle \;\langle 0,3\rangle \;\ldots \]

\[ \langle 1,0\rangle \;\langle \mathbf{1},\mathbf{1}\rangle \;\langle 1,2\rangle \;\langle 1,3\rangle \;\ldots \]

\[ \langle 2,0\rangle \;\langle 2,1\rangle \;\langle \mathbf{2},\mathbf{2}\rangle \;\langle 2,3\rangle \;\ldots \]

\[ \langle 3,0\rangle \;\langle 3,1\rangle \;\langle 3,2\rangle \;\langle \mathbf{3},\mathbf{3}\rangle \;\ldots \]

\[ \vdots \]

We have put the diagonal, here, in bold, since the subset of \( {\mathbb{N}}^{2} \) consisting of the pairs lying on the diagonal, i.e.,

\[ \{ \langle 0,0\rangle ,\langle 1,1\rangle ,\langle 2,2\rangle ,\ldots \} \]

is the identity relation on \( \mathbb{N} \). (Since the identity relation is popular, let’s define \( {\operatorname{Id}}_{A} = \{ \langle x, x\rangle : x \in A\} \) for any set \( A \); here we write \( I \) for \( {\operatorname{Id}}_{\mathbb{N}} \).) The subset of all pairs lying above the diagonal, i.e.,

\[ L = \{ \langle 0,1\rangle ,\langle 0,2\rangle ,\ldots ,\langle 1,2\rangle ,\langle 1,3\rangle ,\ldots ,\langle 2,3\rangle ,\langle 2,4\rangle ,\ldots \} , \]

is the less than relation, i.e., \( {Lnm} \) iff \( n < m \). The subset of pairs below the diagonal, i.e.,

\[ G = \{ \langle 1,0\rangle ,\langle 2,0\rangle ,\langle 2,1\rangle ,\langle 3,0\rangle ,\langle 3,1\rangle ,\langle 3,2\rangle ,\ldots \} , \]

is the greater than relation, i.e., \( {Gnm} \) iff \( n > m \). The union of \( L \) with \( I \), which we might call \( K = L \cup I \), is the less than or equal to relation: \( {Knm} \) iff \( n \leq m \). Similarly, \( H = G \cup I \) is the greater than or equal to relation. These relations \( L, G \), \( K \), and \( H \) are special kinds of relations called orders. \( L \) and \( G \) have the property that no number bears \( L \) or \( G \) to itself (i.e., for all \( n \), neither \( {Lnn} \) nor \( {Gnn} \)).
Relations with this property are called irreflexive, and, if they also happen to be orders, they are called strict orders.
|
Yes
|
Proposition 2.12. If \( R \subseteq {A}^{2} \) is an equivalence relation, then \( {Rxy} \) iff \( {\left\lbrack x\right\rbrack }_{R} = {\left\lbrack y\right\rbrack }_{R} \) .
|
Proof. For the left-to-right direction, suppose \( {Rxy} \), and let \( z \in {\left\lbrack x\right\rbrack }_{R} \). By definition, then, \( {Rxz} \). Since \( R \) is an equivalence relation, \( {Ryz} \). (Spelling this out: as \( {Rxy} \) and \( R \) is symmetric we have \( {Ryx} \), and as \( {Rxz} \) and \( R \) is transitive we have \( {Ryz} \).) So \( z \in {\left\lbrack y\right\rbrack }_{R} \). Generalising, \( {\left\lbrack x\right\rbrack }_{R} \subseteq {\left\lbrack y\right\rbrack }_{R} \). But exactly similarly, \( {\left\lbrack y\right\rbrack }_{R} \subseteq {\left\lbrack x\right\rbrack }_{R} \). So \( {\left\lbrack x\right\rbrack }_{R} = {\left\lbrack y\right\rbrack }_{R} \), by extensionality.

For the right-to-left direction, suppose \( {\left\lbrack x\right\rbrack }_{R} = {\left\lbrack y\right\rbrack }_{R} \). Since \( R \) is reflexive, \( {Ryy} \), so \( y \in {\left\lbrack y\right\rbrack }_{R} \). Thus also \( y \in {\left\lbrack x\right\rbrack }_{R} \) by the assumption that \( {\left\lbrack x\right\rbrack }_{R} = {\left\lbrack y\right\rbrack }_{R} \). So \( {Rxy} \).
|
Yes
|
A nice example of equivalence relations comes from modular arithmetic. For any \( a, b \), and \( n \in \mathbb{N} \), say that \( a{ \equiv }_{n}b \) iff \( a \) and \( b \) have the same remainder when divided by \( n \). (Somewhat more symbolically: \( a{ \equiv }_{n}b \) iff \( \left( {\exists k \in \mathbb{Z}}\right) a - b = {kn} \).) Now, \( { \equiv }_{n} \) is an equivalence relation, for any \( n \).
|
And there are exactly \( n \) distinct equivalence classes generated by \( { \equiv }_{n} \) ; that is, \( \mathbb{N}/{ \equiv }_{n} \) has \( n \) elements. These are: the set of numbers divisible by \( n \) without remainder, i.e., \( {\left\lbrack 0\right\rbrack }_{{ \equiv }_{n}} \) ; the set of numbers divisible by \( n \) with remainder 1, i.e., \( {\left\lbrack 1\right\rbrack }_{{ \equiv }_{n}};\ldots \) ; and the set of numbers divisible by \( n \) with remainder \( n - 1 \), i.e., \( {\left\lbrack n - 1\right\rbrack }_{{ \equiv }_{n}} \) .
|
Yes
|
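The partition into \( n \) classes is easy to exhibit concretely. A Python sketch (our addition; the dictionary keyed by remainder stands in for \( \mathbb{N}/{ \equiv }_{n} \)):

```python
n = 5
# Group 0..19 into equivalence classes under ≡_n:
# two numbers are related iff they have the same remainder mod n.
classes = {}
for a in range(20):
    classes.setdefault(a % n, []).append(a)

assert len(classes) == n            # exactly n equivalence classes
assert classes[0] == [0, 5, 10, 15] # [0]: divisible by n
assert classes[1] == [1, 6, 11, 16] # [1]: remainder 1
```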
Consider the no longer than relation \( \preccurlyeq \) on \( {\mathbb{B}}^{ * } \): \( x \preccurlyeq y \) iff \( \operatorname{len}\left( x\right) \leq \operatorname{len}\left( y\right) \).
|
This is a preorder (reflexive and transitive), and even connected, but not a partial order, since it is not anti-symmetric. For instance, \( {01} \preccurlyeq {10} \) and \( {10} \preccurlyeq {01} \), but \( {01} \neq {10} \).
|
Yes
|
Proposition 2.25. If \( R \) is a strict order on \( A \), then \( {R}^{ + } = R \cup {\operatorname{Id}}_{A} \) is a partial order. Moreover, if \( R \) is total, then \( {R}^{ + } \) is a linear order.
|
Proof. Suppose \( R \) is a strict order, i.e., \( R \subseteq {A}^{2} \) and \( R \) is irreflexive, asymmetric, and transitive. Let \( {R}^{ + } = R \cup {\operatorname{Id}}_{A} \). We have to show that \( {R}^{ + } \) is reflexive, antisymmetric, and transitive.

\( {R}^{ + } \) is clearly reflexive, since \( \langle x, x\rangle \in {\operatorname{Id}}_{A} \subseteq {R}^{ + } \) for all \( x \in A \).

To show \( {R}^{ + } \) is antisymmetric, suppose for reductio that \( {R}^{ + }{xy} \) and \( {R}^{ + }{yx} \) but \( x \neq y \). Since \( \langle x, y\rangle \in R \cup {\operatorname{Id}}_{A} \), but \( \langle x, y\rangle \notin {\operatorname{Id}}_{A} \), we must have \( \langle x, y\rangle \in R \), i.e., \( {Rxy} \). Similarly, \( {Ryx} \). But this contradicts the assumption that \( R \) is asymmetric.

To establish transitivity, suppose that \( {R}^{ + }{xy} \) and \( {R}^{ + }{yz} \). If both \( \langle x, y\rangle \in R \) and \( \langle y, z\rangle \in R \), then \( \langle x, z\rangle \in R \) since \( R \) is transitive. Otherwise, either \( \langle x, y\rangle \in {\operatorname{Id}}_{A} \), i.e., \( x = y \), or \( \langle y, z\rangle \in {\operatorname{Id}}_{A} \), i.e., \( y = z \). In the first case, we have that \( {R}^{ + }{yz} \) by assumption and \( x = y \), hence \( {R}^{ + }{xz} \). Similarly in the second case. In either case, \( {R}^{ + }{xz} \); thus, \( {R}^{ + } \) is also transitive.

Concerning the "moreover" clause: suppose \( R \) is total, i.e., for all \( x \neq y \), either \( {Rxy} \) or \( {Ryx} \). Then for any \( x, y \in A \), either \( x = y \), in which case \( {R}^{ + }{xy} \) and \( {R}^{ + }{yx} \) both hold, or \( {Rxy} \) or \( {Ryx} \), and hence \( {R}^{ + }{xy} \) or \( {R}^{ + }{yx} \). So \( {R}^{ + } \) is connected, and thus a linear order.
|
Yes
|
Proposition 2.26. If \( R \) is a partial order on \( X \), then \( {R}^{ - } = R \smallsetminus {\operatorname{Id}}_{X} \) is a strict order. Moreover, if \( R \) is linear, then \( {R}^{ - } \) is total.
|
Proof. This is left as an exercise.
|
No
|
Proposition 2.28. If \( < \) totally orders \( A \), then:

\[ \left( {\forall a, b \in A}\right) \left( {\left( {\forall x \in A}\right) \left( {x < a \leftrightarrow x < b}\right) \rightarrow a = b}\right) \]
|
Proof. Suppose \( \left( {\forall x \in A}\right) \left( {x < a \leftrightarrow x < b}\right) \) . If \( a < b \), then \( a < a \), contradicting the fact that \( < \) is irreflexive; so \( a \nless b \) . Exactly similarly, \( b \nless a \) . So \( a = b \), as \( < \) is connected.
|
Yes
|
Example 3.3. Multiplication is a function because it pairs each input (each pair of natural numbers) with a single output: \( \times : {\mathbb{N}}^{2} \rightarrow \mathbb{N} \).
|
By contrast, the square root operation applied to the domain \( \mathbb{N} \) is not functional, since each positive integer \( n \) has two square roots: \( \sqrt{n} \) and \( - \sqrt{n} \) . We can make it functional by only returning the positive square root: \( \sqrt{} : \mathbb{N} \rightarrow \mathbb{R} \) .
|
Yes
|
Example 3.5. Let \( f : \mathbb{N} \rightarrow \mathbb{N} \) be defined such that \( f\left( x\right) = x + 1 \). This is a definition that specifies \( f \) as a function which takes in natural numbers and outputs natural numbers. It tells us that, given a natural number \( x, f \) will output its successor \( x + 1 \).
|
In this case, the codomain \( \mathbb{N} \) is not the range of \( f \) , since the natural number 0 is not the successor of any natural number. The range of \( f \) is the set of all positive integers, \( {\mathbb{Z}}^{ + } \).
|
Yes
|
Let \( g : \mathbb{N} \rightarrow \mathbb{N} \) be defined such that \( g\left( x\right) = x + 2 - 1 \). This tells us that \( g \) is a function which takes in natural numbers and outputs natural numbers. Given a natural number \( x \), \( g \) will output the predecessor of the successor of the successor of \( x \), i.e., \( x + 1 \).
|
We just considered two functions, \( f \) and \( g \), with different definitions. However, these are the same function. After all, for any natural number \( n \), we have that \( f\left( n\right) = n + 1 = n + 2 - 1 = g\left( n\right) \) . Otherwise put: our definitions for \( f \) and \( g \) specify the same mapping by means of different equations. Implicitly, then, we are relying upon a principle of extensionality for functions, \[ \text{if}\forall {xf}\left( x\right) = g\left( x\right) \text{, then}f = g \] provided that \( f \) and \( g \) share the same domain and codomain.
|
Yes
|
We can also define functions by cases. For instance, we could define \( h : \mathbb{N} \rightarrow \mathbb{N} \) by

\[ h\left( x\right) = \left\{ \begin{array}{ll} \frac{x}{2} & \text{ if }x\text{ is even } \\ \frac{x + 1}{2} & \text{ if }x\text{ is odd. } \end{array}\right. \]
|
Since every natural number is either even or odd, the output of this function will always be a natural number. Just remember that if you define a function by cases, every possible input must fall into exactly one case.
|
Yes
|
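A definition by cases translates directly into a conditional. A Python sketch of \( h \) (our addition; integer division stands in for the exact fractions, which is safe because each case's numerator is even):

```python
def h(x):
    # Defined by cases: every natural number is even or odd, never both,
    # so exactly one branch applies to each input.
    if x % 2 == 0:
        return x // 2
    return (x + 1) // 2

# h(0), h(1), ..., h(5)
assert [h(x) for x in range(6)] == [0, 1, 1, 2, 2, 3]
```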
Proposition 3.13. Let \( R \subseteq A \times B \) be such that:\n\n1. If \( {Rxy} \) and \( {Rxz} \) then \( y = z \) ; and\n\n2. for every \( x \in A \) there is some \( y \in B \) such that \( \langle x, y\rangle \in R \) .\n\nThen \( R \) is the graph of the function \( f : A \rightarrow B \) defined by \( f\left( x\right) = y \) iff \( {Rxy} \) .
|
Proof. Suppose there is a \( y \) such that \( {Rxy} \) . If there were another \( z \neq y \) such that \( {Rxz} \), the condition on \( R \) would be violated. Hence, if there is a \( y \) such that \( {Rxy} \), this \( y \) is unique, and so \( f \) is well-defined. Obviously, \( {R}_{f} = R \) .
|
Yes
|
Proposition 3.16. Every bijection has a unique inverse.
|
Proof. Exercise.
|
No
|
Proposition 3.17. Every function \( f \) has at most one inverse.
|
Proof. Exercise.
|
No
|
Consider the functions \( f\left( x\right) = x + 1 \) and \( g\left( x\right) = {2x} \).
|
Since \( \left( {g \circ f}\right) \left( x\right) = g\left( {f\left( x\right) }\right) \), for each input \( x \) you must first take its successor, then multiply the result by two. So their composition is given by \( \left( {g \circ f}\right) \left( x\right) = \) \( 2\left( {x + 1}\right) \).
|
Yes
|
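The order of application is the whole point of composition, and a small Python sketch (our addition; the helper `compose` is ours) makes it concrete:

```python
def f(x): return x + 1   # successor
def g(x): return 2 * x   # doubling

def compose(outer, inner):
    # (outer ∘ inner)(x) = outer(inner(x)): apply inner first
    return lambda x: outer(inner(x))

gf = compose(g, f)            # g ∘ f: successor, then double
assert gf(3) == 2 * (3 + 1) == 8
assert compose(f, g)(3) == 7  # f ∘ g differs: composition is not commutative
```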
Proposition 3.24. Suppose \( R \subseteq A \times B \) has the property that whenever \( {Rxy} \) and \( {Rx}{y}^{\prime } \) then \( y = {y}^{\prime } \). Then \( R \) is the graph of the partial function \( f : A \rightarrow B \) defined by: if there is a \( y \) such that \( {Rxy} \), then \( f\left( x\right) = y \), otherwise \( f\left( x\right) \uparrow \). If \( R \) is also serial, i.e., for each \( x \in A \) there is a \( y \in B \) such that \( {Rxy} \), then \( f \) is total.
|
Proof. Suppose there is a \( y \) such that \( {Rxy} \) . If there were another \( {y}^{\prime } \neq y \) such that \( {Rx}{y}^{\prime } \), the condition on \( R \) would be violated. Hence, if there is a \( y \) such that \( {Rxy} \), that \( y \) is unique, and so \( f \) is well-defined. Obviously, \( {R}_{f} = R \) and \( f \) is total if \( R \) is serial.
|
Yes
|
Proposition 4.2. If \( A \) has an enumeration, it has an enumeration without repetitions.
|
Proof. Suppose \( A \) has an enumeration \( {x}_{1},{x}_{2},\ldots \) in which each \( {x}_{i} \) is an element of \( A \). We can turn this into an enumeration without repetitions: keep \( {x}_{i} \) in the list if it is not among \( {x}_{1},\ldots ,{x}_{i - 1} \), and remove \( {x}_{i} \) if it already appears among \( {x}_{1},\ldots ,{x}_{i - 1} \).
|
Yes
|
The function \( f\left( n\right) = {\left( -1\right) }^{n}\left\lceil \frac{\left( n - 1\right) }{2}\right\rceil \) (where \( \lceil x\rceil \) denotes the ceiling function, which rounds \( x \) up to the nearest integer) enumerates the set of integers \( \mathbb{Z} \).
|
Notice how \( f \) generates the values of \( \mathbb{Z} \) by hopping back and forth between the positive and the negative integers.
|
No
|
Proposition 4.8. There is a surjection \( f : {\mathbb{Z}}^{ + } \rightarrow A \) iff there is a surjection \( g : \mathbb{N} \rightarrow \) A.
|
Proof. Given a surjection \( f : {\mathbb{Z}}^{ + } \rightarrow A \), we can define \( g\left( n\right) = f\left( {n + 1}\right) \) for all \( n \in \mathbb{N} \). It is easy to see that \( g : \mathbb{N} \rightarrow A \) is surjective. Conversely, given a surjection \( g : \mathbb{N} \rightarrow A \), define \( f\left( n\right) = g\left( {n - 1}\right) \) for all \( n \in {\mathbb{Z}}^{ + } \).
|
Yes
|
Proposition 4.10. If \( f : {\mathbb{Z}}^{ + } \rightarrow A \) is surjective (i.e., an enumeration of \( A \) ), there is a bijection \( g : Z \rightarrow A \) where \( Z \) is either \( {\mathbb{Z}}^{ + } \) or \( \{ 1,\ldots, n\} \) for some \( n \in {\mathbb{Z}}^{ + } \) .
|
Proof. We define the function \( g \) recursively: Let \( g\left( 1\right) = f\left( 1\right) \). If \( g\left( i\right) \) has already been defined, let \( g\left( {i + 1}\right) \) be the first value of \( f\left( 1\right), f\left( 2\right) ,\ldots \) not already among \( g\left( 1\right) ,\ldots, g\left( i\right) \), if there is one. If \( A \) has just \( n \) elements, then \( g\left( 1\right) ,\ldots, g\left( n\right) \) are all defined, and so we have defined a function \( g : \{ 1,\ldots, n\} \rightarrow A \). If \( A \) has infinitely many elements, then for any \( i \) there must be an element of \( A \) in the enumeration \( f\left( 1\right), f\left( 2\right) ,\ldots \) which is not already among \( g\left( 1\right) ,\ldots, g\left( i\right) \). In this case we have defined a function \( g : {\mathbb{Z}}^{ + } \rightarrow A \).
|
Yes
|
Corollary 4.11. A set \( A \) is enumerable iff it is empty or there is a bijection \( f : N \rightarrow \) \( A \) where either \( N = \mathbb{N} \) or \( N = \{ 0,\ldots, n\} \) for some \( n \in \mathbb{N} \) .
|
Proof. \( A \) is enumerable iff \( A \) is empty or there is a surjective \( f : {\mathbb{Z}}^{ + } \rightarrow A \) . By Proposition 4.10, the latter holds iff there is a bijective function \( f : Z \rightarrow A \) where \( Z = {\mathbb{Z}}^{ + } \) or \( Z = \{ 1,\ldots, n\} \) for some \( n \in {\mathbb{Z}}^{ + } \) . By the same argument as in the proof of Proposition 4.8, that in turn is the case iff there is a bijection \( g : N \rightarrow A \) where either \( N = \mathbb{N} \) or \( N = \{ 0,\ldots, n - 1\} \) .
|
Yes
|
Proposition 4.12. \( \mathbb{N} \times \mathbb{N} \) is enumerable.
|
Proof. Let \( f : \mathbb{N} \rightarrow \mathbb{N} \times \mathbb{N} \) take each \( k \in \mathbb{N} \) to the tuple \( \langle n, m\rangle \in \mathbb{N} \times \mathbb{N} \) such that \( k \) is the value of the \( n \)th row and \( m \)th column in Cantor’s zig-zag array.
|
No
|
Proposition 4.13. \( {\mathbb{N}}^{n} \) is enumerable, for every \( n \in \mathbb{N} \) .
|
Cantor’s zig-zag method makes the enumerability of \( {\mathbb{N}}^{n} \) visually evident. But let us focus on our array depicting \( {\mathbb{N}}^{2} \). Following the zig-zag line in the array and counting the places, we can check that \( \langle 1,2\rangle \) is associated with the number 7. However, it would be nice if we could compute this more directly. That is, it would be nice to have to hand the inverse of the zig-zag enumeration, \( g : {\mathbb{N}}^{2} \rightarrow \mathbb{N} \), such that

\[ g\left( {\langle 0,0\rangle }\right) = 0, g\left( {\langle 0,1\rangle }\right) = 1, g\left( {\langle 1,0\rangle }\right) = 2,\ldots, g\left( {\langle 1,2\rangle }\right) = 7,\ldots \]

In fact, we can define \( g \) directly by making two observations. First: if the \( n \)th row and \( m \)th column contains value \( v \), then the \( \left( {n + 1}\right) \)st row and \( \left( {m - 1}\right) \)st column contains value \( v + 1 \). Second: the first row of our enumeration consists of the triangular numbers, starting with \( 0, 1, 3, 6 \), etc. The \( k \)th triangular number is the sum of the natural numbers \( \leq k \), which can be computed as \( k\left( {k + 1}\right) /2 \). Putting these two observations together, consider this function:

\[ g\left( {n, m}\right) = \frac{\left( {n + m + 1}\right) \left( {n + m}\right) }{2} + n \]

We often just write \( g\left( {n, m}\right) \) rather than \( g\left( {\langle n, m\rangle }\right) \), since it is easier on the eyes. This tells you first to determine the \( {\left( n + m\right) }^{\text{th }} \) triangular number, and then add \( n \) to it. And it populates the array in exactly the way we would like. So in particular, the pair \( \langle 1,2\rangle \) is sent to \( \frac{4 \times 3}{2} + 1 = 7 \).

This function \( g \) is the inverse of an enumeration of a set of pairs. Such functions are called pairing functions.
|
Yes
|
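The formula for \( g \) is short enough to test exhaustively on a finite patch of the array. A Python sketch (our addition, not from the text):

```python
def g(n, m):
    # The (n+m)-th triangular number, plus n
    return (n + m + 1) * (n + m) // 2 + n

# The first few values along the zig-zag
assert g(0, 0) == 0 and g(0, 1) == 1 and g(1, 0) == 2
assert g(1, 2) == 7

# g assigns distinct codes to distinct pairs on this finite patch
codes = {g(n, m) for n in range(50) for m in range(50)}
assert len(codes) == 50 * 50
```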
The function \( h : {\mathbb{N}}^{2} \rightarrow \mathbb{N} \) given by

\[ h\left( {n, m}\right) = {2}^{n}\left( {{2m} + 1}\right) - 1 \]

is a pairing function for the set of pairs of natural numbers \( {\mathbb{N}}^{2} \).
|
Accordingly, in our second enumeration of \( {\mathbb{N}}^{2} \), the pair \( \langle 0,0\rangle \) has code \( h\left( {0,0}\right) = {2}^{0}\left( {2 \cdot 0 + 1}\right) - 1 = 0;\langle 1,2\rangle \) has code \( {2}^{1} \cdot \left( {2 \cdot 2 + 1}\right) - 1 = 2 \cdot 5 - 1 = 9 \) ; \( \langle 2,6\rangle \) has code \( {2}^{2} \cdot \left( {2 \cdot 6 + 1}\right) - 1 = {51} \) .
|
No
|
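This second pairing function works by splitting \( c + 1 \) into its power-of-two part and its odd part. A Python check (our addition; the patch sizes are chosen just large enough to cover the codes \( 0,\ldots ,1023 \)):

```python
def h(n, m):
    # h(n, m) + 1 = 2^n * (2m + 1): power of two times an odd number
    return 2**n * (2 * m + 1) - 1

assert h(0, 0) == 0
assert h(1, 2) == 9
assert h(2, 6) == 51

# Every natural number below 1024 is hit: every positive integer
# factors uniquely as a power of two times an odd number.
codes = {h(n, m) for n in range(11) for m in range(512)}
assert set(range(1024)) <= codes
```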
Theorem 4.18. \( \wp \left( {\mathbb{Z}}^{ + }\right) \) is not enumerable.
|
Proof. We proceed in the same way, by showing that for every list of subsets of \( {\mathbb{Z}}^{ + } \) there is a subset of \( {\mathbb{Z}}^{ + } \) which cannot be on the list. Suppose the following is a given list of subsets of \( {\mathbb{Z}}^{ + } \):

\[ {Z}_{1},{Z}_{2},{Z}_{3},\ldots \]

We now define a set \( \bar{Z} \) such that for any \( n \in {\mathbb{Z}}^{ + } \), \( n \in \bar{Z} \) iff \( n \notin {Z}_{n} \):

\[ \bar{Z} = \left\{ {n \in {\mathbb{Z}}^{ + } : n \notin {Z}_{n}}\right\} \]

\( \bar{Z} \) is clearly a set of positive integers, since by assumption each \( {Z}_{n} \) is, and thus \( \bar{Z} \in \wp \left( {\mathbb{Z}}^{ + }\right) \). But \( \bar{Z} \) cannot be on the list. To show this, we’ll establish that for each \( k \in {\mathbb{Z}}^{ + } \), \( \bar{Z} \neq {Z}_{k} \).

So let \( k \in {\mathbb{Z}}^{ + } \) be arbitrary. We’ve defined \( \bar{Z} \) so that for any \( n \in {\mathbb{Z}}^{ + } \), \( n \in \bar{Z} \) iff \( n \notin {Z}_{n} \). In particular, taking \( n = k \), \( k \in \bar{Z} \) iff \( k \notin {Z}_{k} \). But this shows that \( \bar{Z} \neq {Z}_{k} \), since \( k \) is an element of one but not the other, and so \( \bar{Z} \) and \( {Z}_{k} \) have different elements. Since \( k \) was arbitrary, \( \bar{Z} \) is not on the list \( {Z}_{1},{Z}_{2},\ldots \)
|
Yes
|
Proposition 4.20. Equinumerosity is an equivalence relation.
|
Proof. We must show that equinumerosity is reflexive, symmetric, and transitive. Let \( A, B \), and \( C \) be sets.

Reflexivity. The identity map \( {\operatorname{Id}}_{A} : A \rightarrow A \), where \( {\operatorname{Id}}_{A}\left( x\right) = x \) for all \( x \in A \), is a bijection. So \( A \approx A \).

Symmetry. Suppose \( A \approx B \), i.e., there is a bijection \( f : A \rightarrow B \). Since \( f \) is bijective, its inverse \( {f}^{-1} \) exists and is also bijective. Hence, \( {f}^{-1} : B \rightarrow A \) is a bijection, so \( B \approx A \).

Transitivity. Suppose that \( A \approx B \) and \( B \approx C \), i.e., there are bijections \( f : A \rightarrow B \) and \( g : B \rightarrow C \). Then the composition \( g \circ f : A \rightarrow C \) is bijective, so that \( A \approx C \).
|
Yes
|
Proposition 4.21. If \( A \approx B \), then \( A \) is enumerable if and only if \( B \) is.
|
Proof. Suppose \( A \approx B \), so there is some bijection \( f : A \rightarrow B \), and suppose that \( A \) is enumerable. Then either \( A = \varnothing \) or there is a surjective function \( g : {\mathbb{Z}}^{ + } \rightarrow A \). If \( A = \varnothing \), then \( B = \varnothing \) also (otherwise there would be an element \( y \in B \) but no \( x \in A \) with \( f\left( x\right) = y \)). If, on the other hand, \( g : {\mathbb{Z}}^{ + } \rightarrow A \) is surjective, then \( f \circ g : {\mathbb{Z}}^{ + } \rightarrow B \) is surjective. To see this, let \( y \in B \). Since \( f \) is surjective, there is an \( x \in A \) such that \( f\left( x\right) = y \). Since \( g \) is surjective, there is an \( n \in {\mathbb{Z}}^{ + } \) such that \( g\left( n\right) = x \). Hence,

\[ \left( {f \circ g}\right) \left( n\right) = f\left( {g\left( n\right) }\right) = f\left( x\right) = y \]

and thus \( f \circ g \) is surjective. We have that \( f \circ g \) is an enumeration of \( B \), and so \( B \) is enumerable.

If \( B \) is enumerable, we obtain that \( A \) is enumerable by repeating the argument with the bijection \( {f}^{-1} : B \rightarrow A \) instead of \( f \).
|
Yes
|
Theorem 4.24 (Cantor). \( A \prec \wp \left( A\right) \), for any set \( A \) .
|
Proof. The map \( f\left( x\right) = \{ x\} \) is an injection \( f : A \rightarrow \wp \left( A\right) \), since if \( x \neq y \), then also \( \{ x\} \neq \{ y\} \) by extensionality, and so \( f\left( x\right) \neq f\left( y\right) \) . So we have that \( A \preccurlyeq \wp \left( A\right) \) . We show that there cannot be a surjective function \( g : A \rightarrow \wp \left( A\right) \), let alone a bijective one, and hence that \( A ≉ \wp \left( A\right) \) . For suppose that \( g : A \rightarrow \wp \left( A\right) \) . Since \( g \) is total, every \( x \in A \) is mapped to a subset \( g\left( x\right) \subseteq A \) . We show that \( g \) cannot be surjective. To do this, we define a subset \( \bar{A} \subseteq A \) which by definition cannot be in the range of \( g \) . Let \[ \bar{A} = \{ x \in A : x \notin g\left( x\right) \} . \] Since \( g\left( x\right) \) is defined for all \( x \in A \), \( \bar{A} \) is clearly a well-defined subset of \( A \) . But it cannot be in the range of \( g \) . Let \( x \in A \) be arbitrary; we show that \( \bar{A} \neq g\left( x\right) \) . If \( x \in g\left( x\right) \), then \( x \) does not satisfy \( x \notin g\left( x\right) \), and so by the definition of \( \bar{A} \), we have \( x \notin \bar{A} \) . Conversely, if \( x \in \bar{A} \), then \( x \) satisfies the defining property of \( \bar{A} \), i.e., \( x \in A \) and \( x \notin g\left( x\right) \) . Since \( x \) was arbitrary, this shows that for each \( x \in A \), \( x \in g\left( x\right) \) iff \( x \notin \bar{A} \), and so \( g\left( x\right) \neq \bar{A} \) . In other words, \( \bar{A} \) cannot be in the range of \( g \), contradicting the assumption that \( g \) is surjective.
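The diagonal construction can be checked concretely on a small finite set. Below is a minimal sketch; the set `A` and the map `g` are illustrative choices, not from the text:

```python
def diagonal_set(A, g):
    """The set A-bar = {x in A : x not in g(x)} from Cantor's proof."""
    return {x for x in A if x not in g(x)}

# A sample map g : A -> P(A); by the theorem it cannot be surjective.
A = {0, 1, 2}
g = {0: {0, 1}, 1: set(), 2: {0, 2}}

A_bar = diagonal_set(A, lambda x: g[x])
# A_bar = {1}: 0 and 2 each lie in their own g-image, 1 does not.
```

As the proof predicts, `A_bar` differs from `g(x)` for every `x`, so it is missing from the range of `g`.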
|
Yes
|
Let \( \lceil x\rceil \) be the ceiling function, which rounds \( x \) up to the nearest integer. Then the function \( f : \mathbb{N} \rightarrow \mathbb{Z} \) given by:\n\n\[ f\left( n\right) = {\left( -1\right) }^{n}\left\lceil \frac{n}{2}\right\rceil \]\n\nenumerates the set of integers \( \mathbb{Z} \) as follows:\n\n\[ f\left( 0\right) \;f\left( 1\right) \;f\left( 2\right) \;f\left( 3\right) \;f\left( 4\right) \;f\left( 5\right) \;f\left( 6\right) \;\ldots \]\n\n\[ \left\lceil \frac{0}{2}\right\rceil \; - \left\lceil \frac{1}{2}\right\rceil \;\left\lceil \frac{2}{2}\right\rceil \; - \left\lceil \frac{3}{2}\right\rceil \;\left\lceil \frac{4}{2}\right\rceil \; - \left\lceil \frac{5}{2}\right\rceil \;\left\lceil \frac{6}{2}\right\rceil \;\ldots \]\n\n\[ \begin{array}{llllllll} 0 & - 1 & 1 & - 2 & 2 & - 3 & 3 & \ldots \end{array} \]
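The enumeration can be computed directly; a quick sketch:

```python
import math

def f(n):
    """f(n) = (-1)^n * ceil(n/2), enumerating Z as 0, -1, 1, -2, 2, ..."""
    return (-1) ** n * math.ceil(n / 2)

values = [f(n) for n in range(7)]   # [0, -1, 1, -2, 2, -3, 3]
```

The first \( 2k + 1 \) values cover exactly the integers from \( -k \) to \( k \), with no repeats.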
|
Notice how \( f \) generates the values of \( \mathbb{Z} \) by "hopping" back and forth between the non-negative and the negative integers.
|
Yes
|
Theorem 4.31. \( {\mathbb{B}}^{\omega } \) is non-enumerable.
|
Proof. Consider any enumeration of a subset of \( {\mathbb{B}}^{\omega } \) . So we have some list \( {s}_{0} \) , \( {s}_{1},{s}_{2},\ldots \) where every \( {s}_{n} \) is an infinite string of 0s and 1s. Let \( {s}_{n}\left( m\right) \) be the \( m \) th digit of the \( n \) th string in this list. So we can now think of our list as an array, where \( {s}_{n}\left( m\right) \) is placed at the \( n \) th row and \( m \) th column:\n\n<table><thead><tr><th></th><th>0</th><th>1</th><th>2</th><th>3</th><th>:</th></tr></thead><tr><td>0</td><td>\( {\mathbf{s}}_{\mathbf{0}}\left( \mathbf{0}\right) \)</td><td>\( {s}_{0}\left( 1\right) \)</td><td>\( {s}_{0}\left( 2\right) \)</td><td>\( {s}_{0}\left( 3\right) \)</td><td>...</td></tr><tr><td>1</td><td>\( {s}_{1}\left( 0\right) \)</td><td>\( {\mathbf{s}}_{\mathbf{1}}\left( \mathbf{1}\right) \)</td><td>\( {s}_{1}\left( 2\right) \)</td><td>\( {s}_{1}\left( 3\right) \)</td><td>. . .</td></tr><tr><td>2</td><td>\( {s}_{2}\left( 0\right) \)</td><td>\( {s}_{2}\left( 1\right) \)</td><td>\( {\mathbf{s}}_{2}\left( 2\right) \)</td><td>\( {s}_{2}\left( 3\right) \)</td><td>. . .</td></tr><tr><td>3</td><td>\( {s}_{3}\left( 0\right) \)</td><td>\( {s}_{3}\left( 1\right) \)</td><td>\( {s}_{3}\left( 2\right) \)</td><td>\( {\mathbf{s}}_{3}\left( 3\right) \)</td><td>...</td></tr><tr><td>:</td><td>:</td><td>:</td><td>:</td><td>:</td><td></td></tr></table>\n\nWe will now construct an infinite string, \( d \), of 0s and 1s which is not on this list. We will do this by specifying each of its entries, i.e., we specify \( d\left( n\right) \) for all \( n \in \mathbb{N} \) . Intuitively, we do this by reading down the diagonal of the array above (hence the name "diagonal method"): we make the \( n \) th digit of \( d \) differ from \( {s}_{n}\left( n\right) \) . Specifically, let\n\n\[ d\left( n\right) = \left\{ \begin{array}{ll} 1 & \text{ if }{s}_{n}\left( n\right) = 0 \\ 0 & \text{ if }{s}_{n}\left( n\right) = 1. \end{array}\right. \]\n\nThen \( d \) differs from \( {s}_{n} \) at the \( n \) th digit, so \( d \neq {s}_{n} \) for every \( n \) . Since the enumeration was arbitrary, no list of elements of \( {\mathbb{B}}^{\omega } \) exhausts it, i.e., \( {\mathbb{B}}^{\omega } \) is non-enumerable.
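The diagonal construction \( d\left( n\right) = 1 - {s}_{n}\left( n\right) \) can be simulated on the first \( n \) digits of the first \( n \) strings; the sample rows below are arbitrary:

```python
def diagonal(rows):
    """Flip the diagonal: d(n) = 1 - s_n(n), so d differs from the n-th
    string at its n-th digit."""
    return [1 - rows[n][n] for n in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal(rows)   # [1, 0, 1, 0], absent from the list
```

Whatever rows are supplied, the result disagrees with row \( n \) in column \( n \), so it can never be one of the rows.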
|
Yes
|
Theorem 4.32. \( \wp \left( \mathbb{N}\right) \) is not enumerable.
|
Proof. We proceed in the same way, by showing that every list of subsets of \( \mathbb{N} \) omits some subset of \( \mathbb{N} \) . So, suppose that we have some list \( {N}_{0},{N}_{1},{N}_{2},\ldots \) of subsets of \( \mathbb{N} \) . We define a set \( D \) as follows: \( n \in D \) iff \( n \notin {N}_{n} \) :\n\n\[ D = \left\{ {n \in \mathbb{N} : n \notin {N}_{n}}\right\} \]\n\nClearly \( D \subseteq \mathbb{N} \) . But \( D \) cannot be on the list. After all, by construction \( n \in D \) iff \( n \notin {N}_{n} \), so that \( D \neq {N}_{n} \) for any \( n \in \mathbb{N} \) .
|
Yes
|
Proposition 5.1. \( \sim \) is an equivalence relation.
|
Proof. We must show that \( \sim \) is reflexive, symmetric, and transitive.\n\nReflexivity: Evidently \( \langle a, b\rangle \sim \langle a, b\rangle \), since \( a + b = b + a \).\n\nSymmetry: Suppose \( \langle a, b\rangle \sim \langle c, d\rangle \), so \( a + d = c + b \). Then \( c + b = a + d \), so that \( \langle c, d\rangle \sim \langle a, b\rangle \).\n\nTransitivity: Suppose \( \langle a, b\rangle \sim \langle c, d\rangle \sim \langle m, n\rangle \). So \( a + d = c + b \) and \( c + n = m + d \). So \( a + d + c + n = c + b + m + d \), and so \( a + n = m + b \). Hence \( \langle a, b\rangle \sim \langle m, n\rangle \).
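Concretely, \( \langle a, b\rangle \) stands for the integer \( a - b \), and \( \sim \) identifies pairs with the same difference. A quick sketch (the sample pairs are illustrative):

```python
def sim(p, q):
    """<a,b> ~ <c,d> iff a + d = c + b, i.e., a - b = c - d over N."""
    (a, b), (c, d) = p, q
    return a + d == c + b

# (1,3), (5,7) and (2,4) all represent -2; (4,2) represents 2.
```

The assertions below spot-check symmetry-free facts and one transitivity instance from the proof.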
|
Yes
|
Theorem 5.4. \( \sqrt{2} \) is not rational, i.e., \( \sqrt{2} \notin \mathbb{Q} \)
|
Proof. Suppose, for reductio, that \( \sqrt{2} \) is rational. So \( \sqrt{2} = m/n \) for some natural numbers \( m \) and \( n \) . Indeed, we can choose \( m \) and \( n \) so that the fraction cannot be reduced any further. Re-organising, \( {m}^{2} = 2{n}^{2} \) . From here, we can complete the proof in two ways:\n\nFirst, geometrically (following Tennenbaum). \( {}^{2} \) Consider a square of side \( m \) containing two overlapping squares of side \( n \), one in each of two opposite corners (figure omitted). Since \( {m}^{2} = 2{n}^{2} \), the region where the two squares of side \( n \) overlap has the same area as the region which neither of the two squares cover; i.e., the area of the orange square equals the sum of the area of the two unshaded squares. So where the orange square has side \( p \), and each unshaded square has side \( q \), \( {p}^{2} = 2{q}^{2} \) . But now \( \sqrt{2} = p/q \), with \( p < m \) and \( q < n \) and \( p, q \in \mathbb{N} \) . This contradicts the fact that \( m \) and \( n \) were chosen to be as small as possible.\n\n---\n\n\( {}^{2} \) This proof is reported by Conway (2006).\n\n---\n\nSecond, formally. Since \( {m}^{2} = 2{n}^{2} \), it follows that \( m \) is even. (It is easy to show that, if \( x \) is odd, then \( {x}^{2} \) is odd.) So \( m = {2r} \), for some \( r \in \mathbb{N} \) . Rearranging, \( 2{r}^{2} = {n}^{2} \), so \( n \) is also even. So both \( m \) and \( n \) are even, and hence the fraction \( m/n \) can be reduced further. Contradiction!
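The formal argument can be spot-checked by brute force: the theorem says no pair of naturals satisfies \( {m}^{2} = 2{n}^{2} \), so a finite search must always come up empty. A minimal sketch:

```python
def has_solution(bound):
    """Search for 1 <= m, n <= bound with m*m == 2*n*n; by the theorem
    the search always fails, whatever the bound."""
    return any(m * m == 2 * n * n
               for m in range(1, bound + 1)
               for n in range(1, bound + 1))
```

Of course a search only checks finitely many cases; the descent argument in the proof is what rules out all of them.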
|
Yes
|
Theorem 5.6. The set of cuts has the Completeness Property.
|
Proof. Let \( S \) be any non-empty set of cuts with an upper bound. Let \( \lambda = \bigcup S \) . We first claim that \( \lambda \) is a cut:\n\n1. Since \( S \) is non-empty, at least one cut \( \alpha \) is in \( S \) ; since cuts are non-empty, \( \varnothing \neq \alpha \subseteq \lambda \), so \( \lambda \neq \varnothing \) . Since \( S \) is a set of cuts, \( \lambda \subseteq \mathbb{Q} \) . Since \( S \) has an upper bound, some \( p \in \mathbb{Q} \) is absent from every cut \( \alpha \in S \) . So \( p \notin \lambda \), and hence \( \lambda \subsetneq \mathbb{Q} \) .\n\n2. Suppose \( p < q \in \lambda \) . So there is some \( \alpha \in S \) such that \( q \in \alpha \) . Since \( \alpha \) is a cut, \( p \in \alpha \) . So \( p \in \lambda \) .\n\n3. Suppose \( p \in \lambda \) . So there is some \( \alpha \in S \) such that \( p \in \alpha \) . Since \( \alpha \) is a cut, there is some \( q \in \alpha \) such that \( p < q \) . So \( q \in \lambda \) .\n\nThis proves the claim. Moreover, clearly \( \left( {\forall \alpha \in S}\right) \alpha \subseteq \bigcup S = \lambda \), so \( \lambda \) is an upper bound on \( S \) . So now consider any cut \( \kappa < \lambda \), i.e., \( \kappa \subsetneq \lambda \) . So there is some \( p \in \lambda \smallsetminus \kappa \) . Since \( p \in \lambda \), there is some \( \alpha \in S \) such that \( p \in \alpha \) . So \( \alpha \nsubseteq \kappa \), and hence \( \kappa \) is not an upper bound on \( S \) . So \( \lambda \) is the least upper bound on \( S \) .
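The fact that \( \lambda = \bigcup S \) acts as a least upper bound can be illustrated by representing each cut as a membership test on rationals. This is only a sketch; `cut` and `union` are ad hoc names, and only rational cuts are modelled:

```python
from fractions import Fraction

def cut(r):
    """The cut {p in Q : p < r}, as a membership predicate."""
    return lambda p: p < r

def union(S):
    """lambda = union of S: p is in lambda iff p is in some cut in S."""
    return lambda p: any(alpha(p) for alpha in S)

S = [cut(Fraction(1, 2)), cut(Fraction(5, 2)), cut(Fraction(2, 1))]
lub = union(S)   # behaves exactly like cut(5/2), the largest cut in S
```

Membership in `lub` agrees with membership in the largest cut of `S`, matching the proof's conclusion that \( \lambda \) is the l.u.b.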
|
Yes
|
Theorem 5.11. The Cauchy sequences constitute an ordered field.
|
Proof. Exercise.
|
No
|
Theorem 5.12. Every non-empty set of Cauchy sequences with an upper bound has a least upper bound.
|
Proof sketch. Let \( S \) be any non-empty set of Cauchy sequences with an upper bound. So there is some \( p \in \mathbb{Q} \) such that \( {p}_{\mathbb{R}} \) is an upper bound for \( S \) . Let \( r \in S \) ; then there is some \( q \in \mathbb{Q} \) such that \( {q}_{\mathbb{R}} < r \) . So if a least upper bound on \( S \) exists, it is between \( {q}_{\mathbb{R}} \) and \( {p}_{\mathbb{R}} \) (inclusive).\n\nWe will hone in on the l.u.b., by approaching it simultaneously from below and above. In particular, we define two functions, \( f, g : \mathbb{N} \rightarrow \mathbb{Q} \), with the aim that \( f \) will hone in on the l.u.b. from above, and \( g \) will hone on in it from below. We start by defining:\n\n\[ f\left( 0\right) = p \]\n\n\[ g\left( 0\right) = q \]\n\nThen, where \( {a}_{n} = \frac{f\left( n\right) + g\left( n\right) }{2} \), let: \( {}^{6} \)\n\n\[ f\left( {n + 1}\right) = \left\{ \begin{array}{ll} {a}_{n} & \text{ if }{\left( {a}_{n}\right) }_{\mathbb{R}}\text{ is an upper bound for }S \\ f\left( n\right) & \text{ otherwise } \end{array}\right. \]\n\n\[ g\left( {n + 1}\right) = \left\{ \begin{array}{ll} {a}_{n} & \text{ if }{\left( {a}_{n}\right) }_{\mathbb{R}}\text{ is a lower bound for }S \\ g\left( n\right) & \text{ otherwise } \end{array}\right. \]\n\n\( {}^{6} \) This is a recursive definition. But we have not yet given any reason to think that recursive definitions are ok.\n\nBoth \( f \) and \( g \) are Cauchy sequences. (This can be checked fairly easily; but we leave it as an exercise.) Note that the function \( \left( {f - g}\right) \) tends to 0, since the difference between \( f \) and \( g \) halves at every step. Hence \( \left\lbrack f\right\rbrack = \left\lbrack g\right\rbrack \) .\n\nTo show that \( \left\lbrack f\right\rbrack \) is an upper bound for \( S \), we will invoke Theorem 5.11. 
Let \( \left\lbrack h\right\rbrack \in S \) and suppose, for reductio, that \( \left\lbrack f\right\rbrack < \left\lbrack h\right\rbrack \), so that \( {0}_{\mathbb{R}} < \left\lbrack \left( {h - f}\right) \right\rbrack \) . Since \( f \) is a monotonically decreasing Cauchy sequence, there is some \( k \in \mathbb{N} \) such that \( \left\lbrack \left( {{c}_{f\left( k\right) } - f}\right) \right\rbrack < \left\lbrack \left( {h - f}\right) \right\rbrack \) . So:\n\n\[ {\left( f\left( k\right) \right) }_{\mathbb{R}} = \left\lbrack {c}_{f\left( k\right) }\right\rbrack < \left\lbrack f\right\rbrack + \left\lbrack \left( {h - f}\right) \right\rbrack = \left\lbrack h\right\rbrack ,\]\n\ncontradicting the fact that \( {\left( f\left( k\right) \right) }_{\mathbb{R}} \) is, by construction, an upper bound for \( S \) .\n\nIn an exactly similar way, we can show that \( \left\lbrack g\right\rbrack \) is a lower bound for \( S \) . So \( \left\lbrack f\right\rbrack = \left\lbrack g\right\rbrack \) is the least upper bound for \( S \) .
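The bisection defining \( f \) and \( g \) can be sketched numerically. We take the illustrative set \( S = \{ x : {x}^{2} < 2\} \), for which "\( a \) is an upper bound" simplifies to \( {a}^{2} \geq 2 \), and (since this \( S \) is an interval reaching up to its l.u.b.) a midpoint that is not an upper bound can safely replace the lower endpoint; floats stand in for the Cauchy-sequence machinery:

```python
def lub_by_bisection(is_upper, lo, hi, steps=60):
    """f hones in on the l.u.b. from above, g from below; at each step
    the midpoint a_n replaces whichever endpoint it qualifies for."""
    f, g = hi, lo
    for _ in range(steps):
        a = (f + g) / 2
        if is_upper(a):
            f = a        # f(n+1) = a_n if a_n is an upper bound
        else:
            g = a        # otherwise the lower approximation moves up
    return f, g

f, g = lub_by_bisection(lambda a: a * a >= 2, 1.0, 2.0)
```

The gap between `f` and `g` halves at every step, mirroring the claim in the proof that \( (f - g) \) tends to 0.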
|
Yes
|
Lemma 6.3. For any function \( f \) and any \( o \in A \): 1. \( o \in {\operatorname{clo}}_{f}\left( o\right) \) ; and 2. \( {\operatorname{clo}}_{f}\left( o\right) \) is \( f \) -closed; and 3. if \( X \) is \( f \) -closed and \( o \in X \), then \( {\operatorname{clo}}_{f}\left( o\right) \subseteq X \)
|
Proof. Note that there is at least one \( f \) -closed set containing \( o \), namely \( \operatorname{ran}\left( f\right) \cup \{ o\} \) . So \( {\operatorname{clo}}_{f}\left( o\right) \), the intersection of all such sets, exists. We must now check (1)-(3). (1). \( o \in {\operatorname{clo}}_{f}\left( o\right) \), as it is an intersection of sets which all have \( o \) as an element. (2). Suppose \( x \in {\operatorname{clo}}_{f}\left( o\right) \) ; we must show \( f\left( x\right) \in {\operatorname{clo}}_{f}\left( o\right) \) . So let \( X \) be any \( f \) -closed set with \( o \in X \) . Then \( x \in X \), and since \( X \) is \( f \) -closed, \( f\left( x\right) \in X \) . As \( X \) was arbitrary, \( f\left( x\right) \in {\operatorname{clo}}_{f}\left( o\right) \) . (3). This follows from the general fact that if \( X \in C \), then \( \bigcap C \subseteq X \) .
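When the closure happens to be finite, \( {\operatorname{clo}}_{f}\left( o\right) \) can be computed by iterating \( f \) from \( o \). A sketch with an illustrative \( f \):

```python
def clo(f, o, limit=10_000):
    """clo_f(o) = {o, f(o), f(f(o)), ...}: iterate f until nothing new
    appears (assumes the closure is finite)."""
    closure, x = {o}, o
    for _ in range(limit):
        x = f(x)
        if x in closure:
            return closure
        closure.add(x)
    raise ValueError("closure not finite within limit")

# Example: f(x) = (x + 3) % 10; the closure of 0 is all of {0, ..., 9}.
C = clo(lambda x: (x + 3) % 10, 0)
```

The result is \( f \)-closed and contains \( o \), and by construction nothing smaller does, matching (1)-(3).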
|
Yes
|
Theorem 6.5. If there is a Dedekind infinite set, then there is a Dedekind algebra.
|
Proof. Let \( D \) be Dedekind infinite. So there is an injection \( g : D \rightarrow D \) and an element \( o \in D \smallsetminus \operatorname{ran}\left( g\right) \) . Now let \( A = {\operatorname{clo}}_{g}\left( o\right) \), and note that \( o \in A \) . Let \( f = g{ \upharpoonright }_{A} \) . We will show that this constitutes a Dedekind algebra.\n\nConcerning (1): \( o \notin \operatorname{ran}\left( g\right) \) and \( \operatorname{ran}\left( f\right) \subseteq \operatorname{ran}\left( g\right) \), so \( o \notin \operatorname{ran}\left( f\right) \) .\n\nConcerning (2): \( g \) is an injection on \( D \), so \( f \subseteq g \) must be an injection.\n\nConcerning (3): Let \( B \subseteq A \) with \( o \in B \) . By Lemma 6.3, if \( B \subsetneq A \), then \( B \) is not \( g \) -closed. So \( B \) is not \( f \) -closed either, as \( f = g{ \upharpoonright }_{A} \) . So \( A \) is the smallest \( f \) -closed set with \( o \) as an element, i.e., \( A = {\operatorname{clo}}_{f}\left( o\right) \) .
|
Yes
|
Theorem 6.6 (Arithmetical induction). Let \( N, s \), o yield a Dedekind algebra. Then for any set \( X \) :\n\n\[ \text{if}o \in X\text{and}\left( {\forall n \in N \cap X}\right) s\left( n\right) \in X\text{, then}N \subseteq X\text{.} \]
|
Proof. By the definition of a Dedekind algebra, \( N = {\operatorname{clo}}_{s}\left( o\right) \) . Now if both \( o \in X \) and \( \left( {\forall n \in N}\right) \left( {n \in X \rightarrow s\left( n\right) \in X}\right) \), then \( N = {\operatorname{clo}}_{s}\left( o\right) \subseteq X \) .
|
Yes
|
Corollary 6.7. Let \( N, s, o \) yield a Dedekind algebra. Then for any formula \( \varphi \left( x\right) \) , which may have parameters:\n\n\[ \text{if}\varphi \left( o\right) \text{and}\left( {\forall n \in N}\right) \left( {\varphi \left( n\right) \rightarrow \varphi \left( {s\left( n\right) }\right) }\right) \text{, then}\left( {\forall n \in N}\right) \varphi \left( n\right) \]
|
Proof. Let \( X = \{ n \in N : \varphi \left( n\right) \} \), and now use Theorem 6.6
|
No
|
Proposition 6.8. For any function \( f \), and any \( B \): 1. \( B \subseteq {\operatorname{Clo}}_{f}\left( B\right) \) ; and 2. \( {\operatorname{Clo}}_{f}\left( B\right) \) is \( f \) -closed; and 3. if \( X \) is \( f \) -closed and \( B \subseteq X \), then \( {\operatorname{Clo}}_{f}\left( B\right) \subseteq X \) .
|
Proof. Exactly as in Lemma 6.3.
|
No
|
Proposition 6.9. If \( A \subseteq B \subseteq C \) and \( A \approx C \), then \( A \approx B \approx C \).
|
Proof. Given a bijection \( f : C \rightarrow A \), let \( F = {\operatorname{Clo}}_{f}\left( {C \smallsetminus B}\right) \) and define a function \( g \) with domain \( C \) as follows:\n\n\[ g\left( x\right) = \left\{ \begin{array}{ll} f\left( x\right) & \text{ if }x \in F \\ x & \text{ otherwise } \end{array}\right. \]\n\nWe’ll show that \( g \) is a bijection from \( C \) to \( B \), from which it will follow that \( g \circ {f}^{-1} : A \rightarrow B \) is a bijection, completing the proof.\n\nFirst we claim that if \( x \in F \) but \( y \notin F \) then \( g\left( x\right) \neq g\left( y\right) \). For reductio suppose otherwise, so that \( y = g\left( y\right) = g\left( x\right) = f\left( x\right) \). Since \( x \in F \) and \( F \) is \( f \) -closed by Proposition 6.8, we have \( y = f\left( x\right) \in F \), a contradiction.\n\nNow suppose \( g\left( x\right) = g\left( y\right) \). So, by the above, \( x \in F \) iff \( y \in F \). If \( x, y \in F \), then \( f\left( x\right) = g\left( x\right) = g\left( y\right) = f\left( y\right) \) so that \( x = y \) since \( f \) is a bijection. If \( x, y \notin F \), then \( x = g\left( x\right) = g\left( y\right) = y \). So \( g \) is an injection.\n\nIt remains to show that \( \operatorname{ran}\left( g\right) = B \). So fix \( x \in B \subseteq C \). If \( x \notin F \), then \( g\left( x\right) = x \). If \( x \in F \), then \( x = f\left( y\right) \) for some \( y \in F \), since \( x \in B \) and \( F \) is the smallest \( f \) -closed set extending \( C \smallsetminus B \), so that \( g\left( y\right) = f\left( y\right) = x \).
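The construction can be simulated on an initial segment of \( \mathbb{N} \). Take the illustrative instance \( C = \mathbb{N} \), \( B = \{1\} \cup \{\text{even numbers}\} \), \( A = \{\text{even numbers}\} \), with the bijection \( f\left( n\right) = 2n \) from \( C \) to \( A \). Then \( C \smallsetminus B \) is the odd numbers \( \geq 3 \), and for this particular \( f \) membership in \( F = {\operatorname{Clo}}_{f}\left( {C \smallsetminus B}\right) \) can be decided arithmetically: \( n \in F \) exactly when its odd part is \( \geq 3 \):

```python
def odd_part(n):
    while n > 0 and n % 2 == 0:
        n //= 2
    return n

def in_F(n):
    """F = Clo_f(C \\ B) for f(n) = 2n: repeatedly halving n reaches an
    odd number >= 3 (an element of C \\ B) iff its odd part is >= 3."""
    return odd_part(n) >= 3

def g(n):
    """The map from the proof: apply f on F, identity elsewhere."""
    return 2 * n if in_F(n) else n

N = 200
image = {g(n) for n in range(N)}
```

The checks below confirm the three facts established in the proof, restricted to this finite window: `g` is injective, its values lie in \( B \), and every element of \( B \) below the window bound is hit.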
|
Yes
|
Theorem 7.4 (Principle of induction on formulas). If some property \( P \) holds for all the atomic formulas and is such that\n\n1. it holds for \( \neg \varphi \) whenever it holds for \( \varphi \) ;\n\n2. it holds for \( \left( {\varphi \land \psi }\right) \) whenever it holds for \( \varphi \) and \( \psi \) ;\n\n3. it holds for \( \left( {\varphi \vee \psi }\right) \) whenever it holds for \( \varphi \) and \( \psi \) ;\n\n4. it holds for \( \left( {\varphi \rightarrow \psi }\right) \) whenever it holds for \( \varphi \) and \( \psi \) ;\n\nthen \( P \) holds for all formulas.
|
Proof. Let \( S \) be the collection of all formulas with property \( P \) . Clearly \( S \subseteq \) \( \operatorname{Frm}\left( {\mathcal{L}}_{0}\right) \) . \( S \) satisfies all the conditions of Definition 7.1: it contains all atomic formulas and is closed under the logical operators. \( \operatorname{Frm}\left( {\mathcal{L}}_{0}\right) \) is the smallest such class, so \( \operatorname{Frm}\left( {\mathcal{L}}_{0}\right) \subseteq S \) . So \( \operatorname{Frm}\left( {\mathcal{L}}_{0}\right) = S \), and every formula has property \( P \) .
|
Yes
|
Proposition 7.7 (Unique Readability). Any formula \( \varphi \) in \( \operatorname{Frm}\left( {\mathcal{L}}_{0}\right) \) has exactly one parsing as one of the following\n\n1. \( \bot \) .\n\n2. \( {p}_{n} \) for some \( {p}_{n} \in {\mathrm{{At}}}_{0} \) .\n\n3. \( \neg \psi \) for some formula \( \psi \) .\n\n4. \( \left( {\psi \land \chi }\right) \) for some formulas \( \psi \) and \( \chi \) .\n\n5. \( \left( {\psi \vee \chi }\right) \) for some formulas \( \psi \) and \( \chi \) .\n\n6. \( \left( {\psi \rightarrow \chi }\right) \) for some formulas \( \psi \) and \( \chi \) .\n\nMoreover, this parsing is unique.
|
Proof. By induction on \( \varphi \) . For instance, suppose that \( \varphi \) has two distinct readings as \( \left( {\psi \rightarrow \chi }\right) \) and \( \left( {{\psi }^{\prime } \rightarrow {\chi }^{\prime }}\right) \) . Then \( \psi \) and \( {\psi }^{\prime } \) must be the same (or else one would be a proper initial segment of the other); so if the two readings of \( \varphi \) are distinct it must be because \( \chi \) and \( {\chi }^{\prime } \) are distinct readings of the same sequence of symbols, which is impossible by the inductive hypothesis.
|
Yes
|
Theorem 7.11 (Local Determination). Suppose that \( {\mathfrak{v}}_{1} \) and \( {\mathfrak{v}}_{2} \) are valuations that agree on the propositional letters occurring in \( \varphi \), i.e., \( {\mathfrak{v}}_{1}\left( {p}_{n}\right) = {\mathfrak{v}}_{2}\left( {p}_{n}\right) \) for every \( {p}_{n} \) occurring in \( \varphi \) . Then \( \overline{{\mathfrak{v}}_{1}} \) and \( \overline{{\mathfrak{v}}_{2}} \) also agree on \( \varphi \), i.e., \( \overline{{\mathfrak{v}}_{1}}\left( \varphi \right) = \overline{{\mathfrak{v}}_{2}}\left( \varphi \right) \) .
|
Proof. By induction on \( \varphi \) .
|
No
|
Proposition 7.13. \( \mathfrak{v} \vDash \varphi \) iff \( \overline{\mathfrak{v}}\left( \varphi \right) = \mathbb{T} \) .
|
Proof. By induction on \( \varphi \) .
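The clause-by-clause computation of \( \overline{\mathfrak{v}} \) behind this induction can be sketched directly; formulas are encoded as nested tuples (an ad hoc encoding, not from the text):

```python
def vbar(v, phi):
    """Extend a valuation v (a dict on letter indices) to all formulas:
    ('p', n), ('not', a), ('and', a, b), ('or', a, b), ('imp', a, b)."""
    op = phi[0]
    if op == 'p':
        return v[phi[1]]
    if op == 'not':
        return not vbar(v, phi[1])
    if op == 'and':
        return vbar(v, phi[1]) and vbar(v, phi[2])
    if op == 'or':
        return vbar(v, phi[1]) or vbar(v, phi[2])
    if op == 'imp':
        return (not vbar(v, phi[1])) or vbar(v, phi[2])
    raise ValueError(op)

# (p0 & p1) -> p0: vbar comes out True under every valuation.
phi = ('imp', ('and', ('p', 0), ('p', 1)), ('p', 0))
```

Then \( \mathfrak{v} \vDash \varphi \) corresponds exactly to `vbar(v, phi)` returning `True`.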
|
No
|
Proposition 7.15. 1. \( \varphi \) is a tautology if and only if \( \varnothing \vDash \varphi \) ;
|
Proof. Exercise.
|
No
|
Proposition 7.16. \( \Gamma \vDash \varphi \) if and only if \( \Gamma \cup \{ \neg \varphi \} \) is unsatisfiable.
|
Proof. Exercise.
|
No
|
Theorem 7.17 (Semantic Deduction Theorem). \( \Gamma \vDash \varphi \rightarrow \psi \) if and only if \( \Gamma \cup \) \( \{ \varphi \} \vDash \psi \) .
|
Proof. Exercise.
|
No
|
Every initial sequent, e.g., \( \chi \Rightarrow \chi \) is a derivation.
|
We can obtain a new derivation from this by applying, say, the WL rule,\n\n\[ \frac{\Gamma \Rightarrow \Delta }{\varphi ,\Gamma \Rightarrow \Delta }\mathrm{{WL}} \]\n\nThe rule, however, is meant to be general: we can replace the \( \varphi \) in the rule with any sentence, e.g., also with \( \theta \) . If the premise matches our initial sequent \( \chi \Rightarrow \chi \), that means that both \( \Gamma \) and \( \Delta \) are just \( \chi \), and the conclusion would then be \( \theta ,\chi \Rightarrow \chi \) . So, the following is a derivation:\n\n\[ \frac{\chi \Rightarrow \chi }{\theta ,\chi \Rightarrow \chi }\mathrm{{WL}} \]\n\nWe can now apply another rule, say XL, which allows us to switch two sentences on the left. So, the following is also a correct derivation:\n\n\[ \frac{\dfrac{\chi \Rightarrow \chi }{\theta ,\chi \Rightarrow \chi }\mathrm{{WL}}}{\chi ,\theta \Rightarrow \chi }\mathrm{{XL}} \]\n\nIn this application of the rule, which was given as\n\n\[ \frac{\Gamma ,\varphi ,\psi ,\Pi \Rightarrow \Delta }{\Gamma ,\psi ,\varphi ,\Pi \Rightarrow \Delta }\mathrm{{XL}} \]\n\nboth \( \Gamma \) and \( \Pi \) were empty, \( \Delta \) is \( \chi \), and the roles of \( \varphi \) and \( \psi \) are played by \( \theta \) and \( \chi \), respectively. In much the same way, we also see that\n\n\[ \frac{\theta \Rightarrow \theta }{\chi ,\theta \Rightarrow \theta }\mathrm{{WL}} \]\n\nis a derivation. Now we can take these two derivations, and combine them using \( \land \mathrm{R} \) . That rule was\n\n\[ \frac{\Gamma \Rightarrow \Delta ,\varphi \;\Gamma \Rightarrow \Delta ,\psi }{\Gamma \Rightarrow \Delta ,\varphi \land \psi } \land \mathrm{R} \]\n\nIn our case, the premises must match the last sequents of the derivations ending in the premises. That means that \( \Gamma \) is \( \chi ,\theta \), \( \Delta \) is empty, \( \varphi \) is \( \chi \), and \( \psi \) is \( \theta \) .
So the conclusion, if the inference should be correct, is \( \chi ,\theta \Rightarrow \chi \land \theta \) .\n\n\[ \frac{\dfrac{\dfrac{\chi \Rightarrow \chi }{\theta ,\chi \Rightarrow \chi }\mathrm{{WL}}}{\chi ,\theta \Rightarrow \chi }\mathrm{{XL}}\;\dfrac{\theta \Rightarrow \theta }{\chi ,\theta \Rightarrow \theta }\mathrm{{WL}}}{\chi ,\theta \Rightarrow \chi \land \theta } \land \mathrm{R} \]\n\nOf course, we can also reverse the premises; then \( \varphi \) would be \( \theta \) and \( \psi \) would be \( \chi \) .\n\n\[ \frac{\dfrac{\theta \Rightarrow \theta }{\chi ,\theta \Rightarrow \theta }\mathrm{{WL}}\;\dfrac{\dfrac{\chi \Rightarrow \chi }{\theta ,\chi \Rightarrow \chi }\mathrm{{WL}}}{\chi ,\theta \Rightarrow \chi }\mathrm{{XL}}}{\chi ,\theta \Rightarrow \theta \land \chi } \land \mathrm{R} \]
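The bookkeeping in these rule applications can be mechanized. Here is a small sketch with sequents as pairs of tuples; the encoding and function names are ad hoc, and only the three rules used above are modelled:

```python
def WL(seq, phi):
    """Weakening left: from Gamma => Delta infer phi, Gamma => Delta."""
    ante, succ = seq
    return ((phi,) + ante, succ)

def XL(seq, i):
    """Exchange left: swap the sentences at positions i and i + 1."""
    ante, succ = seq
    a = list(ante)
    a[i], a[i + 1] = a[i + 1], a[i]
    return (tuple(a), succ)

def andR(s1, s2):
    """From Gamma => Delta, phi and Gamma => Delta, psi infer
    Gamma => Delta, phi & psi; the contexts must match."""
    assert s1[0] == s2[0] and s1[1][:-1] == s2[1][:-1]
    return (s1[0], s1[1][:-1] + (('and', s1[1][-1], s2[1][-1]),))

# Rebuild the derivation of chi, theta => chi & theta step by step:
left = XL(WL((('chi',), ('chi',)), 'theta'), 0)    # chi, theta => chi
right = WL((('theta',), ('theta',)), 'chi')        # chi, theta => theta
conclusion = andR(left, right)
```

Since each function only produces the lower sequent its rule licenses, any sequent built this way from initial sequents is derivable.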
|
Yes
|
Give an LK-derivation for the sequent \( \varphi \land \psi \Rightarrow \varphi \) .
|
We begin by writing the desired end-sequent at the bottom of the derivation.\n\n\[ \varphi \land \psi \Rightarrow \varphi \]\n\nNext, we need to figure out what kind of inference could have a lower sequent of this form. This could be a structural rule, but it is a good idea to start by looking for a logical rule. The only logical connective occurring in the lower sequent is \( \land \), so we’re looking for an \( \land \) rule, and since the \( \land \) symbol occurs in the antecedent, we’re looking at the \( \land \mathrm{L} \) rule.\n\n\[ \varphi \land \psi \Rightarrow \varphi \]\n\nThere are two options for what could have been the upper sequent of the \( \land \mathrm{L} \) inference: we could have an upper sequent of \( \varphi \Rightarrow \varphi \), or of \( \psi \Rightarrow \varphi \) . Clearly, \( \varphi \Rightarrow \varphi \) is an initial sequent (which is a good thing), while \( \psi \Rightarrow \varphi \) is not derivable in general. We fill in the upper sequent:\n\n\[ \frac{\varphi \Rightarrow \varphi }{\varphi \land \psi \Rightarrow \varphi } \land \mathrm{L} \]\n\nWe now have a correct LK-derivation of the sequent \( \varphi \land \psi \Rightarrow \varphi \) .
|
Yes
|
Suppose we want to prove \( \Rightarrow \varphi \vee \neg \varphi \) .
|
Applying \( \vee \mathrm{R} \) backwards would give us one of these two derivations:\n\n\[ \frac{ \Rightarrow \varphi }{ \Rightarrow \varphi \vee \neg \varphi } \vee \mathrm{R}\;\frac{ \Rightarrow \neg \varphi }{ \Rightarrow \varphi \vee \neg \varphi } \vee \mathrm{R} \]\n\nNeither of these of course ends in an initial sequent. The trick is to realize that the contraction rule allows us to combine two copies of a sentence into one, and when we're searching for a proof, i.e., going from bottom to top, we can keep a copy of \( \varphi \vee \neg \varphi \) in the premise, e.g.,\n\n\[ \frac{\dfrac{ \Rightarrow \varphi \vee \neg \varphi ,\varphi }{ \Rightarrow \varphi \vee \neg \varphi ,\varphi \vee \neg \varphi } \vee \mathrm{R}}{ \Rightarrow \varphi \vee \neg \varphi }\mathrm{{CR}} \]\n\nNow we can apply \( \vee \mathrm{R} \) a second time, and also get \( \neg \varphi \), which leads to a complete derivation (using an exchange on the right to bring \( \varphi \) into position).\n\n\[ \frac{\dfrac{\dfrac{\dfrac{\dfrac{\varphi \Rightarrow \varphi }{ \Rightarrow \varphi ,\neg \varphi }\neg \mathrm{R}}{ \Rightarrow \varphi ,\varphi \vee \neg \varphi } \vee \mathrm{R}}{ \Rightarrow \varphi \vee \neg \varphi ,\varphi }\mathrm{{XR}}}{ \Rightarrow \varphi \vee \neg \varphi ,\varphi \vee \neg \varphi } \vee \mathrm{R}}{ \Rightarrow \varphi \vee \neg \varphi }\mathrm{{CR}} \]
|
Yes
|
Proposition 9.12 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
|
Proof. The initial sequent \( \varphi \Rightarrow \varphi \) is derivable, and \( \{ \varphi \} \subseteq \Gamma \) .
|
Yes
|
Proposition 9.13 (Monotony). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \) .
|
Proof. Suppose \( \Gamma \vdash \varphi \), i.e., there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \Rightarrow \varphi \) is derivable. Since \( \Gamma \subseteq \Delta \), \( {\Gamma }_{0} \) is also a finite subset of \( \Delta \) . The derivation of \( {\Gamma }_{0} \Rightarrow \varphi \) thus also shows \( \Delta \vdash \varphi \) .
|
Yes
|
Proposition 9.14 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
|
Proof. If \( \Gamma \vdash \varphi \), there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) and a derivation \( {\pi }_{0} \) of \( {\Gamma }_{0} \Rightarrow \varphi \) . If \( \{ \varphi \} \cup \Delta \vdash \psi \), then for some finite subset \( {\Delta }_{0} \subseteq \Delta \), there is a derivation \( {\pi }_{1} \) of \( \varphi ,{\Delta }_{0} \Rightarrow \psi \) . Consider the following derivation:\n\n\[ \frac{\begin{matrix} {\pi }_{0} \\ {\Gamma }_{0} \Rightarrow \varphi \end{matrix}\;\begin{matrix} {\pi }_{1} \\ \varphi ,{\Delta }_{0} \Rightarrow \psi \end{matrix}}{{\Gamma }_{0},{\Delta }_{0} \Rightarrow \psi }\mathrm{{Cut}} \]\n\nSince \( {\Gamma }_{0} \cup {\Delta }_{0} \subseteq \Gamma \cup \Delta \), this shows \( \Gamma \cup \Delta \vdash \psi \) .
|
Yes
|
Proposition 9.15. \( \Gamma \) is inconsistent iff \( \Gamma \vdash \varphi \) for every sentence \( \varphi \) .
|
Proof. Exercise.
|
No
|
Proposition 9.16 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \) . 2. If every finite subset of \( \Gamma \) is consistent, then \( \Gamma \) is consistent.
|
Proof. 1. If \( \Gamma \vdash \varphi \), then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that the sequent \( {\Gamma }_{0} \Rightarrow \varphi \) has a derivation. Consequently, \( {\Gamma }_{0} \vdash \varphi \) . 2. If \( \Gamma \) is inconsistent, there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that LK derives \( {\Gamma }_{0} \Rightarrow \) . But then \( {\Gamma }_{0} \) is a finite subset of \( \Gamma \) that is inconsistent.
|
Yes
|
Proposition 9.17. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. There are finite \( {\Gamma }_{0} \) and \( {\Gamma }_{1} \subseteq \Gamma \) such that \( \mathbf{{LK}} \) derives \( {\Gamma }_{0} \Rightarrow \varphi \) and \( \varphi ,{\Gamma }_{1} \Rightarrow \) . Let the LK-derivation of \( {\Gamma }_{0} \Rightarrow \varphi \) be \( {\pi }_{0} \) and the LK-derivation of \( \varphi ,{\Gamma }_{1} \Rightarrow \) be \( {\pi }_{1} \) . We can then derive\n\n\[ \frac{\begin{matrix} {\pi }_{0} \\ {\Gamma }_{0} \Rightarrow \varphi \end{matrix}\;\begin{matrix} {\pi }_{1} \\ \varphi ,{\Gamma }_{1} \Rightarrow \end{matrix}}{{\Gamma }_{0},{\Gamma }_{1} \Rightarrow }\mathrm{{Cut}} \]\n\nSince \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \), \( {\Gamma }_{0} \cup {\Gamma }_{1} \subseteq \Gamma \) ; hence \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 9.18. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
|
Proof. First suppose \( \Gamma \vdash \varphi \), i.e., there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) and a derivation \( {\pi }_{0} \) of \( {\Gamma }_{0} \Rightarrow \varphi \) . By adding a \( \neg \mathrm{L} \) rule, we obtain a derivation of \( \neg \varphi ,{\Gamma }_{0} \Rightarrow \), i.e., \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.\n\nIf \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent, there is a finite \( {\Gamma }_{1} \subseteq \Gamma \) and a derivation \( {\pi }_{1} \) of \( \neg \varphi ,{\Gamma }_{1} \Rightarrow \) . The following is a derivation of \( {\Gamma }_{1} \Rightarrow \varphi \) :\n\n\[ \frac{\dfrac{\varphi \Rightarrow \varphi }{ \Rightarrow \varphi ,\neg \varphi }\neg \mathrm{R}\;\begin{matrix} {\pi }_{1} \\ \neg \varphi ,{\Gamma }_{1} \Rightarrow \end{matrix}}{{\Gamma }_{1} \Rightarrow \varphi }\mathrm{{Cut}} \]\n\nHence \( \Gamma \vdash \varphi \) .
|
Yes
|
Proposition 9.19. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
|
Proof. Suppose \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \) . Then there is a derivation \( \pi \) of a sequent \( {\Gamma }_{0} \Rightarrow \varphi \) for some finite \( {\Gamma }_{0} \subseteq \Gamma \) . The sequent \( {\Gamma }_{0},\neg \varphi \Rightarrow \) is also derivable:\n\n\[ \frac{\begin{matrix} \pi \\ {\Gamma }_{0} \Rightarrow \varphi \end{matrix}\;\dfrac{\dfrac{\varphi \Rightarrow \varphi }{\neg \varphi ,\varphi \Rightarrow }\neg \mathrm{L}}{\varphi ,\neg \varphi \Rightarrow }\mathrm{{XL}}}{{\Gamma }_{0},\neg \varphi \Rightarrow }\mathrm{{Cut}} \]\n\nSince \( \neg \varphi \in \Gamma \) and \( {\Gamma }_{0} \subseteq \Gamma \), this shows that \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 9.20. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. There are finite sets \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \) and LK-derivations \( {\pi }_{0} \) and \( {\pi }_{1} \) of \( \varphi ,{\Gamma }_{0} \Rightarrow \) and \( \neg \varphi ,{\Gamma }_{1} \Rightarrow \), respectively. We can then derive\n\n\[ \frac{\dfrac{\begin{matrix} {\pi }_{0} \\ \varphi ,{\Gamma }_{0} \Rightarrow \end{matrix}}{{\Gamma }_{0} \Rightarrow \neg \varphi }\neg \mathrm{R}\;\begin{matrix} {\pi }_{1} \\ \neg \varphi ,{\Gamma }_{1} \Rightarrow \end{matrix}}{{\Gamma }_{0},{\Gamma }_{1} \Rightarrow }\mathrm{{Cut}} \]\n\nSince \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \), \( {\Gamma }_{0} \cup {\Gamma }_{1} \subseteq \Gamma \) . Hence \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 9.21. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \). 2. \( \varphi ,\psi \vdash \varphi \land \psi \) .
|
Proof. 1. Both sequents \( \varphi \land \psi \Rightarrow \varphi \) and \( \varphi \land \psi \Rightarrow \psi \) are derivable:\n\n\[ \frac{\varphi \Rightarrow \varphi }{\varphi \land \psi \Rightarrow \varphi } \land \mathrm{L}\;\frac{\psi \Rightarrow \psi }{\varphi \land \psi \Rightarrow \psi } \land \mathrm{L} \]\n\n2. Here is a derivation of the sequent \( \varphi ,\psi \Rightarrow \varphi \land \psi \) :\n\n\[ \frac{\varphi \Rightarrow \varphi \;\psi \Rightarrow \psi }{\varphi ,\psi \Rightarrow \varphi \land \psi } \land \mathrm{R} \]
Proposition 9.22. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
Proof. 1. We give a derivation of the sequent \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \Rightarrow \):

\[ \frac{\dfrac{\dfrac{\varphi \Rightarrow \varphi }{\neg \varphi ,\varphi \Rightarrow }\,\neg \mathrm{L}}{\varphi ,\neg \varphi ,\neg \psi \Rightarrow }\qquad \dfrac{\dfrac{\psi \Rightarrow \psi }{\neg \psi ,\psi \Rightarrow }\,\neg \mathrm{L}}{\psi ,\neg \varphi ,\neg \psi \Rightarrow }}{\varphi \vee \psi ,\neg \varphi ,\neg \psi \Rightarrow }\, \vee \mathrm{L} \]

(Recall that double inference lines indicate several weakening, contraction, and exchange inferences.)
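The same inconsistency can be replayed as a Lean 4 proof term (a sketch, illustrative only), where the three assumptions yield `False` by ∨-elimination:

```lean
variable (φ ψ : Prop)

-- φ ∨ ψ, ¬φ, ¬ψ ⊢ ⊥: case analysis on the disjunction,
-- each case refuted by the matching negated assumption
example (h : φ ∨ ψ) (hnφ : ¬φ) (hnψ : ¬ψ) : False :=
  h.elim hnφ hnψ
```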
Proposition 9.23. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) .
Proof. 1. The sequent \( \varphi \rightarrow \psi ,\varphi \Rightarrow \psi \) is derivable:

\[ \frac{\varphi \Rightarrow \varphi \qquad \psi \Rightarrow \psi }{\varphi \rightarrow \psi ,\varphi \Rightarrow \psi } \rightarrow \mathrm{L} \]
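In type-theoretic terms this is modus ponens, which is just function application. A Lean 4 sketch (illustrative only):

```lean
variable (φ ψ : Prop)

-- φ, φ → ψ ⊢ ψ: apply the implication to its antecedent
example (hφ : φ) (himp : φ → ψ) : ψ := himp hφ
```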
Corollary 9.27. If \( \Gamma \vdash \varphi \) then \( \Gamma \vDash \varphi \) .
Proof. If \( \Gamma \vdash \varphi \) then for some finite subset \( {\Gamma }_{0} \subseteq \Gamma \), there is a derivation of \( {\Gamma }_{0} \Rightarrow \varphi \) . By Theorem 9.25, every valuation \( \mathfrak{v} \) either makes some \( \psi \in {\Gamma }_{0} \) false or makes \( \varphi \) true. Hence, if \( \mathfrak{v} \vDash \Gamma \) then also \( \mathfrak{v} \vDash \varphi \) .
Corollary 9.28. If \( \Gamma \) is satisfiable, then it is consistent.
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) and a derivation of \( {\Gamma }_{0} \Rightarrow \) . By Theorem 9.25, \( {\Gamma }_{0} \Rightarrow \) is valid. In other words, for every valuation \( \mathfrak{v} \), there is \( \chi \in {\Gamma }_{0} \) such that \( \mathfrak{v} \nvDash \chi \), and since \( {\Gamma }_{0} \subseteq \Gamma \), that \( \chi \) is also in \( \Gamma \). Thus, no \( \mathfrak{v} \) satisfies \( \Gamma \), and \( \Gamma \) is not satisfiable.
Every assumption on its own is a derivation. So, e.g., \( \chi \) by itself is a derivation, and so is \( \theta \) by itself. We can obtain a new derivation from these by applying, say, the \( \land \) Intro rule,
\[ \frac{\varphi \quad \psi }{\varphi \land \psi } \land \text{Intro} \]

These rules are meant to be general: we can replace the \( \varphi \) and \( \psi \) in them with any sentences, e.g., with \( \chi \) and \( \theta \). Then the conclusion would be \( \chi \land \theta \), and so

\[ \frac{\chi \quad \theta }{\chi \land \theta } \land \text{Intro} \]

is a correct derivation. Of course, we can also switch the assumptions, so that \( \theta \) plays the role of \( \varphi \) and \( \chi \) that of \( \psi \). Thus,

\[ \frac{\theta \quad \chi }{\theta \land \chi } \land \text{Intro} \]

is also a correct derivation.
Let’s give a derivation of the sentence \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \) .
\[ \frac{\dfrac{{\left\lbrack \varphi \land \psi \right\rbrack }^{1}}{\varphi }\, \land \text{Elim}}{\left( {\varphi \land \psi }\right) \rightarrow \varphi }\,{\scriptstyle 1}\, \rightarrow \text{Intro} \]
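The derivation corresponds to a proof term that discharges the assumption as a λ-binder and then projects. A Lean 4 sketch (illustrative only):

```lean
variable (φ ψ : Prop)

-- (φ ∧ ψ) → φ: λ-abstraction plays the role of →Intro,
-- and .left plays the role of ∧Elim
example : (φ ∧ ψ) → φ := fun h => h.left
```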
For instance, suppose we want to derive \( \varphi \vee \neg \varphi \). Our usual strategy would be to attempt to derive \( \varphi \vee \neg \varphi \) using \( \vee \) Intro. But this would require us to derive either \( \varphi \) or \( \neg \varphi \) from no assumptions, and this can’t be done. \( { \bot }_{C} \) to the rescue!
Now we’re looking for a derivation of \( \bot \) from \( \neg \left( {\varphi \vee \neg \varphi }\right) \). Since \( \bot \) is the conclusion of \( \neg \) Elim, we might try that. Our strategy for finding a derivation of \( \neg \varphi \) calls for an application of \( \neg \) Intro. Here, we can get \( \bot \) easily by applying \( \neg \) Elim to the assumption \( \neg \left( {\varphi \vee \neg \varphi }\right) \) and \( \varphi \vee \neg \varphi \), which follows from our new assumption \( \varphi \) by \( \vee \) Intro. On the right side we use the same strategy, except we get \( \varphi \) by \( { \bot }_{C} \). [The intermediate derivation trees are not reproduced here.]
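The completed derivation has a direct analogue in Lean 4, where \( { \bot }_{C} \) corresponds to `Classical.byContradiction` (a sketch, illustrative only):

```lean
variable (φ : Prop)

-- Assume ¬(φ ∨ ¬φ); from it derive ¬φ (the inner λ), inject with
-- Or.inr, and obtain the contradiction, mirroring the derivation.
example : φ ∨ ¬φ :=
  Classical.byContradiction fun h : ¬(φ ∨ ¬φ) =>
    h (Or.inr fun hφ => h (Or.inl hφ))
```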
Proposition 10.10 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
Proof. The assumption \( \varphi \) by itself is a derivation of \( \varphi \) where every undischarged assumption (i.e., \( \varphi \) ) is in \( \Gamma \) .
Proposition 10.11 (Monotony). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \) .
Proof. Any derivation of \( \varphi \) from \( \Gamma \) is also a derivation of \( \varphi \) from \( \Delta \) .
Proposition 10.12 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
Proof. If \( \Gamma \vdash \varphi \), there is a derivation \( {\delta }_{0} \) of \( \varphi \) with all undischarged assumptions in \( \Gamma \). If \( \{ \varphi \} \cup \Delta \vdash \psi \), then there is a derivation \( {\delta }_{1} \) of \( \psi \) with all undischarged assumptions in \( \{ \varphi \} \cup \Delta \). Now consider the derivation obtained by writing a copy of \( {\delta }_{0} \) above every undischarged assumption \( \varphi \) in \( {\delta }_{1} \):

\[ \begin{matrix} {\delta }_{0} \\ \varphi \\ {\delta }_{1} \\ \psi \end{matrix} \]

The undischarged assumptions are now all among \( \Gamma \cup \Delta \), so this shows \( \Gamma \cup \Delta \vdash \psi \).
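Under the propositions-as-types reading, this substitution of derivations is simply composition of proof terms. A Lean 4 sketch (illustrative only; \( \gamma \) stands in for the assumptions in \( \Gamma \)):

```lean
variable (γ φ ψ : Prop)

-- Plugging a derivation of φ into a derivation of ψ from φ
example (δ₀ : γ → φ) (δ₁ : φ → ψ) : γ → ψ := fun hγ => δ₁ (δ₀ hγ)
```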
Proposition 10.13. \( \Gamma \) is inconsistent iff \( \Gamma \vdash \varphi \) for every sentence \( \varphi \) .
Proof. Exercise.
Proposition 10.14 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \).
Proof. 1. If \( \Gamma \vdash \varphi \), then there is a derivation \( \delta \) of \( \varphi \) from \( \Gamma \) . Let \( {\Gamma }_{0} \) be the set of undischarged assumptions of \( \delta \) . Since any derivation is finite, \( {\Gamma }_{0} \) can only contain finitely many sentences. So, \( \delta \) is a derivation of \( \varphi \) from a finite \( {\Gamma }_{0} \subseteq \Gamma \).
Proposition 10.15. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
Proof. Let the derivation of \( \varphi \) from \( \Gamma \) be \( {\delta }_{1} \) and the derivation of \( \bot \) from \( \Gamma \cup \{ \varphi \} \) be \( {\delta }_{2} \). We can then derive:

\[ \frac{\dfrac{\begin{matrix} {\left\lbrack \varphi \right\rbrack }^{1} \\ {\delta }_{2} \\ \bot \end{matrix}}{\neg \varphi }\,{\scriptstyle 1}\,\neg \text{Intro}\qquad \begin{matrix} {\delta }_{1} \\ \varphi \end{matrix}}{ \bot }\,\neg \text{Elim} \]

In the new derivation, the assumption \( \varphi \) is discharged, so it is a derivation from \( \Gamma \).
Proposition 10.16. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
Proof. First suppose \( \Gamma \vdash \varphi \), i.e., there is a derivation \( {\delta }_{0} \) of \( \varphi \) from undischarged assumptions \( \Gamma \). We obtain a derivation of \( \bot \) from \( \Gamma \cup \{ \neg \varphi \} \) as follows:

\[ \frac{\neg \varphi \qquad \begin{matrix} {\delta }_{0} \\ \varphi \end{matrix}}{ \bot }\,\neg \text{Elim} \]

Now assume \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent, and let \( {\delta }_{1} \) be the corresponding derivation of \( \bot \) from undischarged assumptions in \( \Gamma \cup \{ \neg \varphi \} \). We obtain a derivation of \( \varphi \) from \( \Gamma \) alone by using \( { \bot }_{C} \):

\[ \frac{\begin{matrix} {\left\lbrack \neg \varphi \right\rbrack }^{1} \\ {\delta }_{1} \\ \bot \end{matrix}}{\varphi }\,{\scriptstyle 1}\,{ \bot }_{C} \]
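Both directions can be replayed as Lean 4 proof terms (a sketch, illustrative only; the second direction is the classical one):

```lean
variable (φ : Prop)

-- If Γ ⊢ φ, the extra assumption ¬φ immediately yields ⊥
example (hφ : φ) (hn : ¬φ) : False := hn hφ

-- Conversely, ⊥_C corresponds to Classical.byContradiction
example (h : ¬φ → False) : φ := Classical.byContradiction h
```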
Proposition 10.17. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
Proof. Suppose \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \). Then there is a derivation \( \delta \) of \( \varphi \) from \( \Gamma \). Consider this simple application of the \( \neg \) Elim rule:

\[ \frac{\neg \varphi \qquad \begin{matrix} \delta \\ \varphi \end{matrix}}{ \bot }\,\neg \text{Elim} \]

Since \( \neg \varphi \in \Gamma \), all undischarged assumptions are in \( \Gamma \); this shows that \( \Gamma \vdash \bot \). □
Proposition 10.18. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
Proof. There are derivations \( {\delta }_{1} \) of \( \bot \) from \( \Gamma \cup \{ \varphi \} \) and \( {\delta }_{2} \) of \( \bot \) from \( \Gamma \cup \{ \neg \varphi \} \), respectively. We can then derive

\[ \frac{\dfrac{\begin{matrix} {\left\lbrack \neg \varphi \right\rbrack }^{2} \\ {\delta }_{2} \\ \bot \end{matrix}}{\neg \neg \varphi }\,{\scriptstyle 2}\,\neg \text{Intro}\qquad \dfrac{\begin{matrix} {\left\lbrack \varphi \right\rbrack }^{1} \\ {\delta }_{1} \\ \bot \end{matrix}}{\neg \varphi }\,{\scriptstyle 1}\,\neg \text{Intro}}{ \bot }\,\neg \text{Elim} \]

Since the assumptions \( \varphi \) and \( \neg \varphi \) are discharged, this is a derivation of \( \bot \) from \( \Gamma \) alone. Hence \( \Gamma \) is inconsistent.
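The combined derivation collapses to a one-line proof term in Lean 4 (a sketch, illustrative only), since `h1` already has the type of \( \neg \varphi \):

```lean
variable (φ : Prop)

-- If both φ and ¬φ lead to ⊥, we obtain ⊥ outright:
-- h1 : φ → False is definitionally ¬φ, so h2 applies to it
example (h1 : φ → False) (h2 : ¬φ → False) : False := h2 h1
```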
Proposition 10.19. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \). 2. \( \varphi ,\psi \vdash \varphi \land \psi \).
Proof. 1. We can derive both

\[ \frac{\varphi \land \psi }{\varphi } \land \text{Elim}\qquad \frac{\varphi \land \psi }{\psi } \land \text{Elim} \]

2. We can derive:

\[ \frac{\varphi \quad \psi }{\varphi \land \psi } \land \text{Intro} \]
Proposition 10.20. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
Proof. 1. Consider the following derivation:

\[ \frac{\varphi \vee \psi \qquad \dfrac{\neg \varphi \quad {\left\lbrack \varphi \right\rbrack }^{1}}{ \bot }\,\neg \text{Elim}\qquad \dfrac{\neg \psi \quad {\left\lbrack \psi \right\rbrack }^{1}}{ \bot }\,\neg \text{Elim}}{ \bot }\,{\scriptstyle 1}\, \vee \text{Elim} \]

This is a derivation of \( \bot \) from undischarged assumptions \( \varphi \vee \psi ,\neg \varphi \), and \( \neg \psi \).
Proposition 10.21. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) . 2. Both \( \neg \varphi \vdash \varphi \rightarrow \psi \) and \( \psi \vdash \varphi \rightarrow \psi \) .
Proof. 1. We can derive:

\[ \frac{\varphi \rightarrow \psi \quad \varphi }{\psi } \rightarrow \text{Elim} \]

2. This is shown by the following two derivations:

\[ \frac{\dfrac{\dfrac{\neg \varphi \quad {\left\lbrack \varphi \right\rbrack }^{1}}{ \bot }\,\neg \text{Elim}}{\psi }\,{ \bot }_{I}}{\varphi \rightarrow \psi }\,{\scriptstyle 1}\, \rightarrow \text{Intro}\qquad \frac{\psi }{\varphi \rightarrow \psi } \rightarrow \text{Intro} \]

Note that \( \rightarrow \) Intro may, but does not have to, discharge the assumption \( \varphi \).
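The two derivations for part 2 correspond to the following Lean 4 proof terms (a sketch, illustrative only); `absurd` packages the \( \neg \) Elim-plus-\( { \bot }_{I} \) step:

```lean
variable (φ ψ : Prop)

-- ¬φ ⊢ φ → ψ: ex falso after applying ¬φ to the assumed φ
example (hn : ¬φ) : φ → ψ := fun hφ => absurd hφ hn

-- ψ ⊢ φ → ψ: vacuous discharge of φ
example (hψ : ψ) : φ → ψ := fun _ => hψ
```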
Corollary 10.24. If \( \Gamma \) is satisfiable, then it is consistent.
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then \( \Gamma \vdash \bot \), i.e., there is a derivation of \( \bot \) from undischarged assumptions in \( \Gamma \). By Theorem 10.22, any valuation \( \mathfrak{v} \) that satisfies \( \Gamma \) must satisfy \( \bot \). Since \( \mathfrak{v} \nvDash \bot \) for every valuation \( \mathfrak{v} \), no \( \mathfrak{v} \) can satisfy \( \Gamma \), i.e., \( \Gamma \) is not satisfiable.