| Q | A | Result |
|---|---|---|
Corollary 24.15. If \( \left\lbrack x\right\rbrack \neq \left\lbrack y\right\rbrack ,\left\lbrack x\right\rbrack \cap \left\lbrack y\right\rbrack = \varnothing \) .
|
Proof. Suppose \( z \in \left\lbrack x\right\rbrack \) and \( x \olessthan y \). Then \( z \olessthan u \) for all \( u \in \left\lbrack y\right\rbrack \). If \( z \in \left\lbrack y\right\rbrack \), we would have \( z \olessthan z \). Similarly if \( y \olessthan x \).
|
No
|
Proposition 24.16. If \( x \) and \( y \) are non-standard, then \( x \olessthan x \oplus y \) and \( x \oplus y \notin \left\lbrack x\right\rbrack \).
|
Proof. If \( y \) is non-standard, then \( y \neq {0}^{\mathfrak{M}} \). Since \( \mathbf{{PA}} \vdash \forall x\forall y\left( {y \neq 0 \rightarrow x < \left( {x + y}\right) }\right) \), it follows that \( x \olessthan x \oplus y \). Now suppose \( x \oplus y \in \left\lbrack x\right\rbrack \). Since \( x \olessthan x \oplus y \), we would have \( x \oplus {n}^{ * } = x \oplus y \) for some standard \( n \). But \( \mathbf{{PA}} \vdash \forall x\forall y\forall z\left( {\left( {x + y}\right) = \left( {x + z}\right) \rightarrow y = z}\right) \) (the cancellation law for addition). This would mean \( y = {n}^{ * } \) for some standard \( n \); but \( y \) is assumed to be non-standard.
|
Yes
|
Proposition 24.17. There is no least non-standard block.
|
Proof. \( \mathbf{{PA}} \vdash \forall x\exists y\left( {\left( {y + y}\right) = x \vee {\left( y + y\right) }^{\prime } = x}\right) \), i.e., every \( x \) is divisible by 2 (possibly with remainder 1). If \( x \) is non-standard, so is \( y \). By the preceding proposition, \( y \olessthan y \oplus y \) and \( y \oplus y \notin \left\lbrack y\right\rbrack \). Then also \( y \olessthan {\left( y \oplus y\right) }^{ * } \) and \( {\left( y \oplus y\right) }^{ * } \notin \left\lbrack y\right\rbrack \). But \( x = y \oplus y \) or \( x = {\left( y \oplus y\right) }^{ * } \), so \( y \olessthan x \) and \( y \notin \left\lbrack x\right\rbrack \).
|
No
|
Proposition 24.19. The ordering of the blocks is dense. That is, if \( x \olessthan y \) and \( \left\lbrack x\right\rbrack \neq \left\lbrack y\right\rbrack \), then there is a block \( \left\lbrack z\right\rbrack \) distinct from both that is between them.
|
Proof. Suppose \( x \olessthan y \). As before, \( x \oplus y \) is divisible by two (possibly with remainder): there is a \( z \in \left| \mathfrak{M}\right| \) such that either \( x \oplus y = z \oplus z \) or \( x \oplus y = {\left( z \oplus z\right) }^{ * } \). The element \( z \) is the "average" of \( x \) and \( y \), and so lies between them.
|
No
|
Example 24.21. Recall the structure \( \mathfrak{K} \) from Example 24.8. Its domain was \( \left| \mathfrak{K}\right| = \mathbb{N} \cup \{ a\} \) and interpretations

\[ {\mathrm{o}}^{\mathfrak{K}} = 0 \]

\[ {\prime }^{\mathfrak{K}}\left( x\right) = \left\{ \begin{array}{ll} x + 1 & \text{ if }x \in \mathbb{N} \\ a & \text{ if }x = a \end{array}\right. \]

\[ { + }^{\mathfrak{K}}\left( {x, y}\right) = \left\{ \begin{array}{ll} x + y & \text{ if }x, y \in \mathbb{N} \\ a & \text{ otherwise } \end{array}\right. \]

\[ { \times }^{\mathfrak{K}}\left( {x, y}\right) = \left\{ \begin{array}{ll} {xy} & \text{ if }x, y \in \mathbb{N} \\ 0 & \text{ if }x = 0\text{ or }y = 0 \\ a & \text{ otherwise } \end{array}\right. \]

\[ { < }^{\mathfrak{K}} = \{ \langle x, y\rangle : x, y \in \mathbb{N}\text{ and }x < y\} \cup \{ \langle x, a\rangle : x \in \left| \mathfrak{K}\right| \} \]

But \( \left| \mathfrak{K}\right| \) is denumerable and so is equinumerous with \( \mathbb{N} \). For instance, \( g : \mathbb{N} \rightarrow \left| \mathfrak{K}\right| \) with \( g\left( 0\right) = a \) and \( g\left( n\right) = n - 1 \) for \( n > 0 \) is a bijection. We can turn it into an isomorphism between a new model \( {\mathfrak{K}}^{\prime } \) of \( \mathbf{Q} \) and \( \mathfrak{K} \). In \( {\mathfrak{K}}^{\prime } \), we have to assign different functions and relations to the symbols of \( {\mathcal{L}}_{A} \), since different elements of \( \mathbb{N} \) play the roles of standard and non-standard numbers.

Specifically, 0 now plays the role of \( a \), not of the smallest standard number. The smallest standard number is now 1. So we assign \( {\mathrm{o}}^{{\mathfrak{K}}^{\prime }} = 1 \). The successor function is also different now: given a standard number, i.e., an \( n > 0 \), it still returns \( n + 1 \). But 0 now plays the role of \( a \), which is its own successor. So \( {\prime }^{{\mathfrak{K}}^{\prime }}\left( 0\right) = 0 \).
For addition and multiplication we likewise have

\[ { + }^{{\mathfrak{K}}^{\prime }}\left( {x, y}\right) = \left\{ \begin{array}{ll} x + y - 1 & \text{ if }x, y > 0 \\ 0 & \text{ otherwise } \end{array}\right. \]

\[ { \times }^{{\mathfrak{K}}^{\prime }}\left( {x, y}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }x = 1\text{ or }y = 1 \\ {xy} - x - y + 2 & \text{ if }x, y > 1 \\ 0 & \text{ otherwise } \end{array}\right. \]

And we have \( \langle x, y\rangle \in { < }^{{\mathfrak{K}}^{\prime }} \) iff \( x < y \) and \( x > 0 \) and \( y > 0 \), or if \( y = 0 \).
|
All of these functions are computable functions of natural numbers and \( { < }^{{\mathfrak{K}}^{\prime }} \) is a decidable relation on \( \mathbb{N} \) -but they are not the same functions as successor, addition, and multiplication on \( \mathbb{N} \), and \( { < }^{{\mathfrak{K}}^{\prime }} \) is not the same relation as \( < \) on \( \mathbb{N} \) .
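Since these operations are computable, we can sanity-check the isomorphism claim in Python. This is an informal sketch, not part of the text: the element \( a \) is modeled by the string "a", and the bijection used is \( g\left( 0\right) = a \), \( g\left( n\right) = n - 1 \) for \( n > 0 \).

```python
A = "a"  # stands in for the non-standard element a of K

# operations of K on N ∪ {a}
def succ_K(x):
    return A if x == A else x + 1

def add_K(x, y):
    return A if x == A or y == A else x + y

# operations of K' on N, as given in the text
def succ_Kp(x):
    return 0 if x == 0 else x + 1

def add_Kp(x, y):
    return x + y - 1 if x > 0 and y > 0 else 0

def g(n):
    # bijection N -> |K|: g(0) = a, g(n) = n - 1 for n > 0
    return A if n == 0 else n - 1

# homomorphism conditions for successor and addition, on a small sample
for x in range(10):
    assert g(succ_Kp(x)) == succ_K(g(x))
    for y in range(10):
        assert g(add_Kp(x, y)) == add_K(g(x), g(y))
```

The assertions inside the loops check that \( g \) commutes with successor and addition on a finite sample of the domain.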
|
Yes
|
Prove that \( \mathfrak{K} \) from Example 24.8 satisfies the remaining axioms of \( \mathbf{Q} \):
|
\[ \forall x\left( {x \times 0}\right) = 0 \] \( \left( {Q}_{6}\right) \)

\[ \forall x\forall y\left( {x \times {y}^{\prime }}\right) = \left( {\left( {x \times y}\right) + x}\right) \] \( \left( {Q}_{7}\right) \)

\[ \forall x\forall y\left( {x < y \leftrightarrow \exists z\left( {{z}^{\prime } + x}\right) = y}\right) \] \( \left( {Q}_{8}\right) \)
|
No
|
Problem 24.4. Expand \( \mathfrak{L} \) of Example 24.9 to include \( \otimes \) and \( \olessthan \) that interpret \( \times \) and \( < \). Show that your structure satisfies the remaining axioms of \( \mathbf{Q} \):
|
\[ \forall x\left( {x \times 0}\right) = 0 \] \( \left( {Q}_{6}\right) \)

\[ \forall x\forall y\left( {x \times {y}^{\prime }}\right) = \left( {\left( {x \times y}\right) + x}\right) \] \( \left( {Q}_{7}\right) \)

\[ \forall x\forall y\left( {x < y \leftrightarrow \exists z\left( {{z}^{\prime } + x}\right) = y}\right) \] \( \left( {Q}_{8}\right) \)
|
No
|
Lemma 25.2. Suppose \( {\mathcal{L}}_{0} \) is the language containing every constant symbol, function symbol and predicate symbol (other than \( \doteq \) ) that occurs in both \( \Gamma \) and \( \Delta \), and let \( {\mathcal{L}}_{0}^{\prime } \) be obtained by the addition of infinitely many new constant symbols \( {c}_{n} \) for \( n \geq 0 \) . Then if \( \Gamma \) and \( \Delta \) are inseparable in \( {\mathcal{L}}_{0} \), they are also inseparable in \( {\mathcal{L}}_{0}^{\prime } \) .
|
Proof. We proceed indirectly: suppose by way of contradiction that \( \Gamma \) and \( \Delta \) are separated in \( {\mathcal{L}}_{0}^{\prime } \) . Then \( \Gamma \vDash \chi \left\lbrack {c/x}\right\rbrack \) and \( \Delta \vDash \neg \chi \left\lbrack {c/x}\right\rbrack \) for some \( \chi \in {\mathcal{L}}_{0} \) (where \( c \) is a new constant symbol—the case where \( \chi \) contains more than one such new constant symbol is similar). By compactness, there are finite subsets \( {\Gamma }_{0} \) of \( \Gamma \) and \( {\Delta }_{0} \) of \( \Delta \) such that \( {\Gamma }_{0} \vDash \chi \left\lbrack {c/x}\right\rbrack \) and \( {\Delta }_{0} \vDash \neg \chi \left\lbrack {c/x}\right\rbrack \) . Let \( \gamma \) be the conjunction of all formulas in \( {\Gamma }_{0} \) and \( \delta \) the conjunction of all formulas in \( {\Delta }_{0} \) . Then \[ \gamma \vDash \chi \left\lbrack {c/x}\right\rbrack ,\;\delta \vDash \neg \chi \left\lbrack {c/x}\right\rbrack . \] From the former, by Generalization, we have \( \gamma \vDash \forall {x\chi } \), and from the latter by contraposition, \( \chi \left\lbrack {c/x}\right\rbrack \vDash \neg \delta \), whence also \( \forall {x\chi } \vDash \neg \delta \) . Contraposition again gives \( \delta \vDash \neg \forall {x\chi } \) . By monotony, \[ \Gamma \vDash \forall {x\chi },\;\Delta \vDash \neg \forall {x\chi }, \] so that \( \forall {x\chi } \) separates \( \Gamma \) and \( \Delta \) in \( {\mathcal{L}}_{0} \) .
|
Yes
|
Lemma 25.3. Suppose that \( \Gamma \cup \{ \exists {x\sigma }\} \) and \( \Delta \) are inseparable, and \( c \) is a new constant symbol not in \( \Gamma ,\Delta \), or \( \sigma \) . Then \( \Gamma \cup \{ \exists {x\sigma },\sigma \left\lbrack {c/x}\right\rbrack \} \) and \( \Delta \) are also inseparable.
|
Proof. Suppose for contradiction that \( \chi \) separates \( \Gamma \cup \{ \exists {x\sigma },\sigma \left\lbrack {c/x}\right\rbrack \} \) and \( \Delta \), while at the same time \( \Gamma \cup \{ \exists {x\sigma }\} \) and \( \Delta \) are inseparable. We distinguish two cases:

1. \( c \) does not occur in \( \chi \): in this case \( \Gamma \cup \{ \exists {x\sigma },\neg \chi \} \) is satisfiable (otherwise \( \chi \) separates \( \Gamma \cup \{ \exists {x\sigma }\} \) and \( \Delta \)). It remains so if \( \sigma \left\lbrack {c/x}\right\rbrack \) is added, so \( \chi \) does not separate \( \Gamma \cup \{ \exists {x\sigma },\sigma \left\lbrack {c/x}\right\rbrack \} \) and \( \Delta \) after all.

2. \( c \) does occur in \( \chi \), so that \( \chi \) has the form \( \chi \left\lbrack {c/x}\right\rbrack \). Then we have that

\[ \Gamma \cup \{ \exists {x\sigma },\sigma \left\lbrack {c/x}\right\rbrack \} \vDash \chi \left\lbrack {c/x}\right\rbrack, \]

whence \( \Gamma ,\exists {x\sigma } \vDash \forall x\left( {\sigma \rightarrow \chi }\right) \) by the Deduction Theorem and Generalization, and finally \( \Gamma \cup \{ \exists {x\sigma }\} \vDash \exists {x\chi } \). On the other hand, \( \Delta \vDash \neg \chi \left\lbrack {c/x}\right\rbrack \) and hence by Generalization \( \Delta \vDash \neg \exists {x\chi } \). So \( \Gamma \cup \{ \exists {x\sigma }\} \) and \( \Delta \) are separable, a contradiction.
|
Yes
|
Lemma 26.8. Suppose \( \alpha \in L\left( \mathcal{L}\right) \), with \( \mathcal{L} \) finite, and assume also that there is an \( n \in \mathbb{N} \) such that for any two structures \( \mathfrak{M} \) and \( \mathfrak{N} \), if \( \mathfrak{M}{ \equiv }_{n}\mathfrak{N} \) and \( \mathfrak{M}{ \vDash }_{L}\alpha \) then also \( \mathfrak{N}{ \vDash }_{L}\alpha \) . Then \( \alpha \) is equivalent to a first-order sentence, i.e., there is a first-order \( \theta \) such that \( {\operatorname{Mod}}_{L}\left( \alpha \right) = {\operatorname{Mod}}_{L}\left( \theta \right) \) .
|
Proof. Let \( n \) be such that any two \( n \)-equivalent structures \( \mathfrak{M} \) and \( \mathfrak{N} \) agree on the value assigned to \( \alpha \). Recall Proposition 23.19: there are only finitely many first-order sentences in a finite language that have quantifier rank no greater than \( n \), up to logical equivalence. Now, for each fixed structure \( \mathfrak{M} \) let \( {\theta }_{\mathfrak{M}} \) be the conjunction of all first-order sentences of quantifier rank no greater than \( n \) true in \( \mathfrak{M} \) (this conjunction is finite), so that \( \mathfrak{N} \vDash {\theta }_{\mathfrak{M}} \) if and only if \( \mathfrak{N}{ \equiv }_{n}\mathfrak{M} \). Then put \( \theta = \bigvee \left\{ {{\theta }_{\mathfrak{M}} : \mathfrak{M}{ \vDash }_{L}\alpha }\right\} \); this disjunction is also finite (up to logical equivalence).

The conclusion \( {\operatorname{Mod}}_{L}\left( \alpha \right) = {\operatorname{Mod}}_{L}\left( \theta \right) \) follows. In fact, if \( \mathfrak{N}{ \vDash }_{L}\theta \) then for some \( \mathfrak{M}{ \vDash }_{L}\alpha \) we have \( \mathfrak{N} \vDash {\theta }_{\mathfrak{M}} \), whence also \( \mathfrak{N}{ \vDash }_{L}\alpha \) (by the hypothesis of the lemma). Conversely, if \( \mathfrak{N}{ \vDash }_{L}\alpha \) then \( {\theta }_{\mathfrak{N}} \) is a disjunct in \( \theta \), and since \( \mathfrak{N} \vDash {\theta }_{\mathfrak{N}} \), also \( \mathfrak{N}{ \vDash }_{L}\theta \).
|
Yes
|
Proposition 27.4. The addition function \( \operatorname{add}\left( {x, y}\right) = x + y \) is primitive recursive.
|
Proof. We already have a primitive recursive definition of add in terms of two functions \( f \) and \( g \) which matches the format of Definition 27.1:

\[ \operatorname{add}\left( {{x}_{0},0}\right) = f\left( {x}_{0}\right) = {x}_{0} \]

\[ \operatorname{add}\left( {{x}_{0}, y + 1}\right) = g\left( {{x}_{0}, y,\operatorname{add}\left( {{x}_{0}, y}\right) }\right) = \operatorname{succ}\left( {\operatorname{add}\left( {{x}_{0}, y}\right) }\right) \]

So add is primitive recursive provided \( f \) and \( g \) are as well. \( f\left( {x}_{0}\right) = {x}_{0} = {P}_{0}^{1}\left( {x}_{0}\right) \), and the projection functions count as primitive recursive, so \( f \) is primitive recursive. The function \( g \) is the three-place function \( g\left( {{x}_{0}, y, z}\right) \) defined by

\[ g\left( {{x}_{0}, y, z}\right) = \operatorname{succ}\left( z\right) . \]

This does not yet tell us that \( g \) is primitive recursive, since \( g \) and succ are not quite the same function: succ is one-place, and \( g \) has to be three-place. But we can define \( g \) by composition from succ and a projection, as \( g\left( {{x}_{0}, y, z}\right) = \operatorname{succ}\left( {{P}_{2}^{3}\left( {{x}_{0}, y, z}\right) }\right) \), so \( g \) is primitive recursive too.
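The recursion scheme and this definition of add can be mirrored in Python. This is a sketch; `primitive_recursion`, `proj`, and `succ` are illustrative helpers, not part of the text's formal apparatus.

```python
def primitive_recursion(f, g):
    """Return h with h(x0, 0) = f(x0) and h(x0, y + 1) = g(x0, y, h(x0, y))."""
    def h(x0, y):
        acc = f(x0)
        for i in range(y):
            acc = g(x0, i, acc)
        return acc
    return h

def succ(x):
    return x + 1

def proj(n, i):
    # P^n_i: returns the i-th of n arguments (0-indexed)
    return lambda *args: args[i]

# f(x0) = P^1_0(x0),  g(x0, y, z) = succ(P^3_2(x0, y, z))
add = primitive_recursion(proj(1, 0), lambda x0, y, z: succ(proj(3, 2)(x0, y, z)))
```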
|
Yes
|
Proposition 27.5. The multiplication function \( \operatorname{mult}\left( {x, y}\right) = x \cdot y \) is primitive recursive.
|
Proof. Exercise.
|
No
|
Here's our very first example of a primitive recursive definition:

\[ h\left( 0\right) = 1 \]

\[ h\left( {y + 1}\right) = 2 \cdot h\left( y\right) . \]
|
This function cannot fit into the form required by Definition 27.1, since \( k = 0 \). The definition also involves the constants 1 and 2. To get around the first problem, let's introduce a dummy argument and define the function \( {h}^{\prime } \):

\[ {h}^{\prime }\left( {{x}_{0},0}\right) = f\left( {x}_{0}\right) = 1 \]

\[ {h}^{\prime }\left( {{x}_{0}, y + 1}\right) = g\left( {{x}_{0}, y,{h}^{\prime }\left( {{x}_{0}, y}\right) }\right) = 2 \cdot {h}^{\prime }\left( {{x}_{0}, y}\right) . \]

The function \( f\left( {x}_{0}\right) = 1 \) can be defined from succ and zero by composition: \( f\left( {x}_{0}\right) = \operatorname{succ}\left( {\operatorname{zero}\left( {x}_{0}\right) }\right) \). The function \( g \) can be defined by composition from \( {g}^{\prime }\left( z\right) = 2 \cdot z \) and projections:

\[ g\left( {{x}_{0}, y, z}\right) = {g}^{\prime }\left( {{P}_{2}^{3}\left( {{x}_{0}, y, z}\right) }\right) \]

and \( {g}^{\prime } \) in turn can be defined by composition as

\[ {g}^{\prime }\left( z\right) = \operatorname{mult}\left( {{g}^{\prime \prime }\left( z\right) ,{P}_{0}^{1}\left( z\right) }\right) \]

and

\[ {g}^{\prime \prime }\left( z\right) = \operatorname{succ}\left( {f\left( z\right) }\right) , \]

where \( f \) is as above: \( f\left( z\right) = \operatorname{succ}\left( {\operatorname{zero}\left( z\right) }\right) \). Now that we have \( {h}^{\prime } \) we can use composition again to let \( h\left( y\right) = {h}^{\prime }\left( {{P}_{0}^{1}\left( y\right) ,{P}_{0}^{1}\left( y\right) }\right) \). This shows that \( h \) can be defined from the basic functions using a sequence of compositions and primitive recursions, so \( h \) is primitive recursive.
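A quick Python sketch of \( {h}^{\prime } \) and \( h \) (names are illustrative) confirms that the dummy-argument trick computes \( h\left( y\right) = {2}^{y} \):

```python
def h_prime(x0, y):
    # h'(x0, 0) = 1;  h'(x0, y + 1) = 2 * h'(x0, y)
    acc = 1
    for _ in range(y):
        acc = 2 * acc
    return acc

def h(y):
    # h(y) = h'(P^1_0(y), P^1_0(y)); the dummy first argument is ignored
    return h_prime(y, y)
```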
|
Yes
|
Proposition 27.7. The exponentiation function \( \exp \left( {x, y}\right) = {x}^{y} \) is primitive recursive.
|
Proof. We can define exp primitive recursively as

\[ \exp \left( {x,0}\right) = 1 \]

\[ \exp \left( {x, y + 1}\right) = \operatorname{mult}\left( {x,\exp \left( {x, y}\right) }\right) . \]

Strictly speaking, this is not a recursive definition from primitive recursive functions. Officially, though, we have:

\[ \exp \left( {x,0}\right) = f\left( x\right) \]

\[ \exp \left( {x, y + 1}\right) = g\left( {x, y,\exp \left( {x, y}\right) }\right) , \]

where

\[ f\left( x\right) = \operatorname{succ}\left( {\operatorname{zero}\left( x\right) }\right) = 1 \]

\[ g\left( {x, y, z}\right) = \operatorname{mult}\left( {{P}_{0}^{3}\left( {x, y, z}\right) ,{P}_{2}^{3}\left( {x, y, z}\right) }\right) = x \cdot z \]

and so \( f \) and \( g \) are defined from primitive recursive functions by composition.
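In Python, the same recursion reads as follows (a sketch; `mult` stands in for the primitive recursive multiplication function):

```python
def mult(x, y):
    return x * y

def exp(x, y):
    acc = 1                 # exp(x, 0) = f(x) = 1
    for _ in range(y):
        acc = mult(x, acc)  # exp(x, y + 1) = g(x, y, exp(x, y)) = x * exp(x, y)
    return acc
```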
|
Yes
|
Proposition 27.8. The predecessor function \( \operatorname{pred}\left( y\right) \) defined by

\[ \operatorname{pred}\left( y\right) = \left\{ \begin{array}{ll} 0 & \text{ if }y = 0 \\ y - 1 & \text{ otherwise } \end{array}\right. \]

is primitive recursive.
|
Proof. Note that

\[ \operatorname{pred}\left( 0\right) = 0\text{ and } \]

\[ \operatorname{pred}\left( {y + 1}\right) = y. \]

This is almost a primitive recursive definition. It does not, strictly speaking, fit into the pattern of definition by primitive recursion, since that pattern requires at least one extra argument \( x \). It is also odd in that it does not actually use \( \operatorname{pred}\left( y\right) \) in the definition of \( \operatorname{pred}\left( {y + 1}\right) \). But we can first define \( {\operatorname{pred}}^{\prime }\left( {x, y}\right) \) by

\[ {\operatorname{pred}}^{\prime }\left( {x,0}\right) = \operatorname{zero}\left( x\right) = 0, \]

\[ {\operatorname{pred}}^{\prime }\left( {x, y + 1}\right) = {P}_{1}^{3}\left( {x, y,{\operatorname{pred}}^{\prime }\left( {x, y}\right) }\right) = y, \]

and then define pred from it by composition, e.g., as \( \operatorname{pred}\left( x\right) = {\operatorname{pred}}^{\prime }\left( {\operatorname{zero}\left( x\right) ,{P}_{0}^{1}\left( x\right) }\right) \). ▱
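A Python sketch of this construction (illustrative names):

```python
def pred_prime(x, y):
    acc = 0          # pred'(x, 0) = zero(x) = 0
    for i in range(y):
        acc = i      # pred'(x, i + 1) = P^3_1(x, i, pred'(x, i)) = i
    return acc

def pred(x):
    # pred(x) = pred'(zero(x), P^1_0(x))
    return pred_prime(0, x)
```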
|
Yes
|
Proposition 27.9. The factorial function \( \operatorname{fac}\left( x\right) = x! = 1 \cdot 2 \cdot 3\cdots x \) is primitive recursive.
|
Proof. The obvious primitive recursive definition is

\[ \operatorname{fac}\left( 0\right) = 1 \]

\[ \operatorname{fac}\left( {y + 1}\right) = \operatorname{fac}\left( y\right) \cdot \left( {y + 1}\right) . \]

Officially, we have to first define a two-place function \( h \)

\[ h\left( {x,0}\right) = {\operatorname{const}}_{1}\left( x\right) \]

\[ h\left( {x, y + 1}\right) = g\left( {x, y, h\left( {x, y}\right) }\right) \]

where \( g\left( {x, y, z}\right) = \operatorname{mult}\left( {{P}_{2}^{3}\left( {x, y, z}\right) ,\operatorname{succ}\left( {{P}_{1}^{3}\left( {x, y, z}\right) }\right) }\right) \) and then let

\[ \operatorname{fac}\left( y\right) = h\left( {{P}_{0}^{1}\left( y\right) ,{P}_{0}^{1}\left( y\right) }\right) . \]

From now on we'll be a bit more laissez-faire and not give the official definitions by composition and primitive recursion.
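Unofficially, the recursion for fac can be sketched in Python as:

```python
def fac(y):
    # h(x, 0) = const_1(x) = 1;  h(x, y + 1) = h(x, y) * (y + 1)
    acc = 1
    for i in range(y):
        acc = acc * (i + 1)
    return acc
```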
|
Yes
|
Proposition 27.10. Truncated subtraction, \( x - y \), defined by

\[ x - y = \left\{ \begin{array}{ll} 0 & \text{ if }y > x \\ x - y & \text{ otherwise } \end{array}\right. \]

is primitive recursive.
|
Proof. We have:

\[ x - 0 = x \]

\[ x - \left( {y + 1}\right) = \operatorname{pred}\left( {x - y}\right) \]
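A Python sketch of this recursion, with `tsub` standing in for truncated subtraction and pred as in Proposition 27.8:

```python
def pred(x):
    return x - 1 if x > 0 else 0

def tsub(x, y):
    acc = x              # x - 0 = x
    for _ in range(y):
        acc = pred(acc)  # x - (y + 1) = pred(x - y)
    return acc
```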
|
Yes
|
Proposition 27.11. The distance between \( x \) and \( y,\left| {x - y}\right| \), is primitive recursive.
|
Proof. We have \( \left| {x - y}\right| = \left( {x - y}\right) + \left( {y - x}\right) \), so the distance can be defined by composition from + and - , which are primitive recursive.
|
Yes
|
Proposition 27.12. The maximum of \( x \) and \( y,\max \left( {x, y}\right) \), is primitive recursive.
|
Proof. We can define \( \max \left( {x, y}\right) \) by composition from + and - by

\[ \max \left( {x, y}\right) = x + \left( {y - x}\right) . \]

If \( x \) is the maximum, i.e., \( x \geq y \), then \( y - x = 0 \), so \( x + \left( {y - x}\right) = x + 0 = x \). If \( y \) is the maximum, then \( y - x \) is the ordinary difference of \( y \) and \( x \), and so \( x + \left( {y - x}\right) = y \). \( ▱ \)
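A Python sketch, with `tsub` as truncated subtraction and `pr_max` as an illustrative name (since `max` is a Python built-in):

```python
def tsub(x, y):
    # truncated subtraction: 0 when y exceeds x
    return x - y if x > y else 0

def pr_max(x, y):
    # max(x, y) = x + (y - x), with truncated subtraction
    return x + tsub(y, x)
```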
|
Yes
|
Proposition 27.13. The minimum of \( x \) and \( y,\min \left( {x, y}\right) \), is primitive recursive.
|
Proof. Exercise.
|
No
|
Proposition 27.14. The set of primitive recursive functions is closed under the following two operations:

1. Finite sums: if \( f\left( {\overrightarrow{x}, z}\right) \) is primitive recursive, then so is the function

\[ g\left( {\overrightarrow{x}, y}\right) = \mathop{\sum }\limits_{{z = 0}}^{y}f\left( {\overrightarrow{x}, z}\right) \]

2. Finite products: if \( f\left( {\overrightarrow{x}, z}\right) \) is primitive recursive, then so is the function

\[ h\left( {\overrightarrow{x}, y}\right) = \mathop{\prod }\limits_{{z = 0}}^{y}f\left( {\overrightarrow{x}, z}\right) \]
|
Proof. For example, finite sums are defined recursively by the equations

\[ g\left( {\overrightarrow{x},0}\right) = f\left( {\overrightarrow{x},0}\right) \]

\[ g\left( {\overrightarrow{x}, y + 1}\right) = g\left( {\overrightarrow{x}, y}\right) + f\left( {\overrightarrow{x}, y + 1}\right) . \]

Finite products are defined analogously, with \( \cdot \) in place of \( + \).
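Both closure operations can be sketched in Python as higher-order helpers (illustrative names):

```python
def finite_sum(f):
    # g(x, y) = f(x, 0) + f(x, 1) + ... + f(x, y)
    def g(x, y):
        acc = f(x, 0)                # g(x, 0) = f(x, 0)
        for z in range(y):
            acc = acc + f(x, z + 1)  # g(x, y + 1) = g(x, y) + f(x, y + 1)
        return acc
    return g

def finite_prod(f):
    # h(x, y) = f(x, 0) * f(x, 1) * ... * f(x, y)
    def h(x, y):
        acc = f(x, 0)
        for z in range(y):
            acc = acc * f(x, z + 1)
        return acc
    return h
```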
|
No
|
Proposition 27.16. The set of primitive recursive relations is closed under boolean operations, that is, if \( P\left( \overrightarrow{x}\right) \) and \( Q\left( \overrightarrow{x}\right) \) are primitive recursive, so are

1. \( \neg P\left( \overrightarrow{x}\right) \)

2. \( P\left( \overrightarrow{x}\right) \land Q\left( \overrightarrow{x}\right) \)

3. \( P\left( \overrightarrow{x}\right) \vee Q\left( \overrightarrow{x}\right) \)

4. \( P\left( \overrightarrow{x}\right) \rightarrow Q\left( \overrightarrow{x}\right) \)
|
Proof. Suppose \( P\left( \overrightarrow{x}\right) \) and \( Q\left( \overrightarrow{x}\right) \) are primitive recursive, i.e., their characteristic functions \( {\chi }_{P} \) and \( {\chi }_{Q} \) are. We have to show that the characteristic functions of \( \neg P\left( \overrightarrow{x}\right) \), etc., are also primitive recursive.

\[ {\chi }_{\neg P}\left( \overrightarrow{x}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }{\chi }_{P}\left( \overrightarrow{x}\right) = 1 \\ 1 & \text{ otherwise } \end{array}\right. \]

We can define \( {\chi }_{\neg P}\left( \overrightarrow{x}\right) \) as \( 1 - {\chi }_{P}\left( \overrightarrow{x}\right) \).

\[ {\chi }_{P \land Q}\left( \overrightarrow{x}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }{\chi }_{P}\left( \overrightarrow{x}\right) = {\chi }_{Q}\left( \overrightarrow{x}\right) = 1 \\ 0 & \text{ otherwise } \end{array}\right. \]

We can define \( {\chi }_{P \land Q}\left( \overrightarrow{x}\right) \) as \( {\chi }_{P}\left( \overrightarrow{x}\right) \cdot {\chi }_{Q}\left( \overrightarrow{x}\right) \) or as \( \min \left( {{\chi }_{P}\left( \overrightarrow{x}\right) ,{\chi }_{Q}\left( \overrightarrow{x}\right) }\right) \).

Similarly, \( {\chi }_{P \vee Q}\left( \overrightarrow{x}\right) = \max \left( {{\chi }_{P}\left( \overrightarrow{x}\right) ,{\chi }_{Q}\left( \overrightarrow{x}\right) }\right) \) and \( {\chi }_{P \rightarrow Q}\left( \overrightarrow{x}\right) = \max \left( {1 - {\chi }_{P}\left( \overrightarrow{x}\right) ,{\chi }_{Q}\left( \overrightarrow{x}\right) }\right) \). \( ▱ \)
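These definitions can be sketched in Python, representing relations by their characteristic functions (all names illustrative; `chi_even` and `chi_pos` are sample relations):

```python
def chi_not(chi_p):
    return lambda x: 1 - chi_p(x)

def chi_and(chi_p, chi_q):
    return lambda x: chi_p(x) * chi_q(x)

def chi_or(chi_p, chi_q):
    return lambda x: max(chi_p(x), chi_q(x))

def chi_impl(chi_p, chi_q):
    return lambda x: max(1 - chi_p(x), chi_q(x))

# sample primitive recursive relations, as characteristic functions
chi_even = lambda x: 1 if x % 2 == 0 else 0
chi_pos = lambda x: 1 if x > 0 else 0
```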
|
Yes
|
Proposition 27.17. The set of primitive recursive relations is closed under bounded quantification, i.e., if \( R\left( {\overrightarrow{x}, z}\right) \) is a primitive recursive relation, then so are the relations \( \left( {\forall z < y}\right) R\left( {\overrightarrow{x}, z}\right) \) and \( \left( {\exists z < y}\right) R\left( {\overrightarrow{x}, z}\right) \) .
|
Proof. By convention, we take \( \left( {\forall z < 0}\right) R\left( {\overrightarrow{x}, z}\right) \) to be true (for the trivial reason that there are no \( z \) less than 0) and \( \left( {\exists z < 0}\right) R\left( {\overrightarrow{x}, z}\right) \) to be false. A universal quantifier functions just like a finite product or iterated minimum, i.e., if \( P\left( {\overrightarrow{x}, y}\right) \Leftrightarrow \left( {\forall z < y}\right) R\left( {\overrightarrow{x}, z}\right) \) then \( {\chi }_{P}\left( {\overrightarrow{x}, y}\right) \) can be defined by

\[ {\chi }_{P}\left( {\overrightarrow{x},0}\right) = 1 \]

\[ {\chi }_{P}\left( {\overrightarrow{x}, y + 1}\right) = \min \left( {{\chi }_{P}\left( {\overrightarrow{x}, y}\right) ,{\chi }_{R}\left( {\overrightarrow{x}, y}\right) }\right) . \]

Bounded existential quantification can similarly be defined using max. Alternatively, it can be defined from bounded universal quantification, using the equivalence \( \left( {\exists z < y}\right) R\left( {\overrightarrow{x}, z}\right) \leftrightarrow \neg \left( {\forall z < y}\right) \neg R\left( {\overrightarrow{x}, z}\right) \). Note that, for example, a bounded quantifier of the form \( \left( {\exists x \leq y}\right) \ldots x\ldots \) is equivalent to \( \left( {\exists x < y + 1}\right) \ldots x\ldots \)
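A Python sketch of both bounded quantifiers (illustrative names; `chi_divides` is a sample relation):

```python
def bounded_forall(chi_R):
    # chi_P(x, 0) = 1;  chi_P(x, y + 1) = min(chi_P(x, y), chi_R(x, y))
    def chi_P(x, y):
        acc = 1
        for z in range(y):
            acc = min(acc, chi_R(x, z))
        return acc
    return chi_P

def bounded_exists(chi_R):
    # via the equivalence (∃z < y) R  iff  not (∀z < y) not R
    chi_P = bounded_forall(lambda x, z: 1 - chi_R(x, z))
    return lambda x, y: 1 - chi_P(x, y)

# sample relation: z is a non-trivial divisor of x
chi_divides = lambda x, z: 1 if z > 1 and x % z == 0 else 0
```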
|
Yes
|
Proposition 27.18. If \( {g}_{0}\left( \overrightarrow{x}\right) ,\ldots ,{g}_{m}\left( \overrightarrow{x}\right) \) are primitive recursive functions, and \( {R}_{0}\left( \overrightarrow{x}\right) ,\ldots ,{R}_{m - 1}\left( \overrightarrow{x}\right) \) are primitive recursive relations, then the function \( f \) defined by

\[ f\left( \overrightarrow{x}\right) = \left\{ \begin{array}{ll} {g}_{0}\left( \overrightarrow{x}\right) & \text{ if }{R}_{0}\left( \overrightarrow{x}\right) \\ {g}_{1}\left( \overrightarrow{x}\right) & \text{ if }{R}_{1}\left( \overrightarrow{x}\right) \text{ and not }{R}_{0}\left( \overrightarrow{x}\right) \\ \vdots & \\ {g}_{m - 1}\left( \overrightarrow{x}\right) & \text{ if }{R}_{m - 1}\left( \overrightarrow{x}\right) \text{ and none of the previous hold } \\ {g}_{m}\left( \overrightarrow{x}\right) & \text{ otherwise } \end{array}\right. \]

is also primitive recursive.
|
Proof. When \( m = 1 \), this is just the function defined by

\[ f\left( \overrightarrow{x}\right) = \operatorname{cond}\left( {{\chi }_{\neg {R}_{0}}\left( \overrightarrow{x}\right) ,{g}_{0}\left( \overrightarrow{x}\right) ,{g}_{1}\left( \overrightarrow{x}\right) }\right) . \]

For \( m \) greater than 1, one can just compose definitions of this form.
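The \( m = 1 \) case can be sketched in Python, assuming cond returns its second argument when the first is 0 and its third otherwise (the Collatz step below is just a sample definition by cases):

```python
def cond(x, y, z):
    # cond(x, y, z) = y if x = 0, and z otherwise
    return y if x == 0 else z

def by_cases(chi_R0, g0, g1):
    # f(x) = g0(x) if R0(x), and g1(x) otherwise
    return lambda x: cond(1 - chi_R0(x), g0(x), g1(x))

# example: the Collatz step, defined by cases on parity
step = by_cases(lambda x: 1 if x % 2 == 0 else 0,
                lambda x: x // 2,
                lambda x: 3 * x + 1)
```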
|
Yes
|
Proposition 27.19. If \( R\left( {\overrightarrow{x}, z}\right) \) is primitive recursive, so is the function \( {m}_{R}\left( {\overrightarrow{x}, y}\right) \) which returns the least \( z \) less than \( y \) such that \( R\left( {\overrightarrow{x}, z}\right) \) holds, if there is one, and \( y \) otherwise. We will write the function \( {m}_{R} \) as

\[ \left( {\min z < y}\right) R\left( {\overrightarrow{x}, z}\right) \]
|
Proof. Note that there can be no \( z < 0 \) such that \( R\left( {\overrightarrow{x}, z}\right) \) since there is no \( z < 0 \) at all. So \( {m}_{R}\left( {\overrightarrow{x},0}\right) = 0 \).

In case the bound is of the form \( y + 1 \) we have three cases: (a) There is a \( z < y \) such that \( R\left( {\overrightarrow{x}, z}\right) \), in which case \( {m}_{R}\left( {\overrightarrow{x}, y + 1}\right) = {m}_{R}\left( {\overrightarrow{x}, y}\right) \). (b) There is no such \( z < y \) but \( R\left( {\overrightarrow{x}, y}\right) \) holds, in which case \( {m}_{R}\left( {\overrightarrow{x}, y + 1}\right) = y \). (c) There is no \( z < y + 1 \) such that \( R\left( {\overrightarrow{x}, z}\right) \), in which case \( {m}_{R}\left( {\overrightarrow{x}, y + 1}\right) = y + 1 \). So,

\[ {m}_{R}\left( {\overrightarrow{x},0}\right) = 0 \]

\[ {m}_{R}\left( {\overrightarrow{x}, y + 1}\right) = \left\{ \begin{array}{ll} {m}_{R}\left( {\overrightarrow{x}, y}\right) & \text{ if }{m}_{R}\left( {\overrightarrow{x}, y}\right) \neq y \\ y & \text{ if }{m}_{R}\left( {\overrightarrow{x}, y}\right) = y\text{ and }R\left( {\overrightarrow{x}, y}\right) \\ y + 1 & \text{ otherwise. } \end{array}\right. \]

Note that there is a \( z < y \) such that \( R\left( {\overrightarrow{x}, z}\right) \) iff \( {m}_{R}\left( {\overrightarrow{x}, y}\right) \neq y \).
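The three-case recursion can be mirrored directly in Python (a sketch; `bounded_min` is an illustrative name):

```python
def bounded_min(chi_R):
    # m_R(x, 0) = 0; each loop iteration applies the recursion clause for bound i + 1
    def m_R(x, y):
        acc = 0
        for i in range(y):
            if acc != i:
                pass             # a witness z < i was already found: keep it
            elif chi_R(x, i) == 1:
                acc = i          # i itself is the least witness
            else:
                acc = i + 1      # no witness below i + 1
        return acc
    return m_R
```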
|
Yes
|
Proposition 27.20. The function \( \operatorname{len}\left( s\right) \), which returns the length of the sequence \( s \) , is primitive recursive.
|
Proof. Let \( R\left( {i, s}\right) \) be the relation defined by

\[ R\left( {i, s}\right) \text{ iff }{p}_{i} \mid s \land {p}_{i + 1} \nmid s. \]

\( R \) is clearly primitive recursive. Whenever \( s \) is the code of a non-empty sequence, i.e.,

\[ s = {p}_{0}^{{a}_{0} + 1} \cdot \ldots \cdot {p}_{k}^{{a}_{k} + 1}, \]

\( R\left( {i, s}\right) \) holds if \( {p}_{i} \) is the largest prime such that \( {p}_{i} \mid s \), i.e., \( i = k \). The length of \( s \) thus is \( i + 1 \) iff \( {p}_{i} \) is the largest prime that divides \( s \), so we can let

\[ \operatorname{len}\left( s\right) = \left\{ \begin{array}{ll} 0 & \text{ if }s = 0\text{ or }s = 1 \\ 1 + \left( {\min i < s}\right) R\left( {i, s}\right) & \text{ otherwise } \end{array}\right. \]

We can use bounded minimization, since there is only one \( i \) that satisfies \( R\left( {i, s}\right) \) when \( s \) is a code of a sequence, and if \( i \) exists it is less than \( s \) itself.
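A Python sketch of the prime-power coding and of len (here called `length` to avoid shadowing Python's built-in `len`; `nth_prime` and `encode` are illustrative helpers):

```python
def nth_prime(i):
    # p_0 = 2, p_1 = 3, p_2 = 5, ... (naive trial division)
    count, n = -1, 1
    while count < i:
        n += 1
        if all(n % d != 0 for d in range(2, n)):
            count += 1
    return n

def encode(seq):
    # <a_0, ..., a_k> is coded as p_0^(a_0 + 1) * ... * p_k^(a_k + 1)
    s = 1
    for i, a in enumerate(seq):
        s *= nth_prime(i) ** (a + 1)
    return s

def length(s):
    if s in (0, 1):
        return 0
    i = 0                          # find the i with p_i | s and p_(i+1) not | s
    while s % nth_prime(i + 1) == 0:
        i += 1
    return i + 1
```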
|
Yes
|
Proposition 27.21. The function append \( \left( {s, a}\right) \), which returns the result of appending a to the sequence \( s \), is primitive recursive.
|
Proof. append can be defined by:

\[ \operatorname{append}\left( {s, a}\right) = \left\{ \begin{array}{ll} {2}^{a + 1} & \text{ if }s = 0\text{ or }s = 1 \\ s \cdot {p}_{\operatorname{len}\left( s\right) }^{a + 1} & \text{ otherwise. } \end{array}\right. \]
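A Python sketch; `nth_prime` and `length` are illustrative helpers mirroring \( {p}_{i} \) and len:

```python
def nth_prime(i):
    # p_0 = 2, p_1 = 3, ... (naive trial division)
    count, n = -1, 1
    while count < i:
        n += 1
        if all(n % d != 0 for d in range(2, n)):
            count += 1
    return n

def length(s):
    if s in (0, 1):
        return 0
    i = 0
    while s % nth_prime(i + 1) == 0:
        i += 1
    return i + 1

def append(s, a):
    if s in (0, 1):
        return 2 ** (a + 1)
    return s * nth_prime(length(s)) ** (a + 1)
```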
|
Yes
|
Proposition 27.22. The function \( \operatorname{element}\left( {s, i}\right) \), which returns the ith element of \( s \) (where the initial element is called the 0th), or 0 if \( i \) is greater than or equal to the length of \( s \), is primitive recursive.
|
Proof. Note that \( a \) is the \( i \)th element of \( s \) iff \( {p}_{i}^{a + 1} \) is the largest power of \( {p}_{i} \) that divides \( s \), i.e., \( {p}_{i}^{a + 1} \mid s \) but \( {p}_{i}^{a + 2} \nmid s \). So:

\[ \operatorname{element}\left( {s, i}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }i \geq \operatorname{len}\left( s\right) \\ \left( {\min a < s}\right) \left( {{p}_{i}^{a + 2} \nmid s}\right) & \text{ otherwise. } \end{array}\right. \]
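A Python sketch, with the bounded search written as a while loop (`nth_prime` is an illustrative helper computing \( {p}_{i} \)):

```python
def nth_prime(i):
    # p_0 = 2, p_1 = 3, ... (naive trial division)
    count, n = -1, 1
    while count < i:
        n += 1
        if all(n % d != 0 for d in range(2, n)):
            count += 1
    return n

def element(s, i):
    # a is the i-th element iff p_i^(a+1) | s but p_i^(a+2) does not divide s
    p = nth_prime(i)
    if s == 0 or s % p != 0:
        return 0                 # i >= len(s)
    a = 0
    while s % p ** (a + 2) == 0:
        a += 1
    return a
```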
|
Yes
|
Proposition 27.23. The function \( \operatorname{concat}\left( {s, t}\right) \), which concatenates two sequences, is primitive recursive.
|
Proof. We want a function concat with the property that

\[ \operatorname{concat}\left( {\left\langle {{a}_{0},\ldots ,{a}_{k}}\right\rangle ,\left\langle {{b}_{0},\ldots ,{b}_{l}}\right\rangle }\right) = \left\langle {{a}_{0},\ldots ,{a}_{k},{b}_{0},\ldots ,{b}_{l}}\right\rangle . \]

We can obtain it by primitive recursion from append: define \( \operatorname{hconcat}\left( {s, t,0}\right) = s \) and \( \operatorname{hconcat}\left( {s, t, n + 1}\right) = \operatorname{append}\left( {\operatorname{hconcat}\left( {s, t, n}\right) ,\operatorname{element}\left( {t, n}\right) }\right) \), and then let \( \operatorname{concat}\left( {s, t}\right) = \operatorname{hconcat}\left( {s, t,\operatorname{len}\left( t\right) }\right) \).
|
No
|
Proposition 27.24. The function subseq \( \left( {s, i, n}\right) \) which returns the subsequence of \( s \) of length \( n \) beginning at the ith element, is primitive recursive.
|
Proof. Exercise.
|
No
|
Proposition 27.25. The function SubtreeSeq \( \left( t\right) \), which returns the code of a sequence the elements of which are the codes of all subtrees of the tree with code \( t \), is primitive recursive.
|
Proof. First note that \( \operatorname{ISubtrees}\left( t\right) = \operatorname{subseq}\left( {t,1,{\left( t\right) }_{0}}\right) \) is primitive recursive and returns the codes of the immediate subtrees of a tree \( t \). Now we can define a helper function \( \operatorname{hSubtreeSeq}\left( {t, n}\right) \) which computes the sequence of all subtrees which are \( n \) nodes removed from the root. The sequence of subtrees of \( t \) which is 0 nodes removed from the root (in other words, begins at the root of \( t \)) is the sequence consisting just of \( t \). To obtain a sequence of all level \( n + 1 \) subtrees of \( t \), we concatenate the level \( n \) subtrees with a sequence consisting of all immediate subtrees of the level \( n \) subtrees. To get a list of all these, note that if \( f\left( x\right) \) is a primitive recursive function returning codes of sequences, then \( {g}_{f}\left( {s, k}\right) = f\left( {\left( s\right) }_{0}\right) \frown \ldots \frown f\left( {\left( s\right) }_{k}\right) \) is also primitive recursive:

\[ {g}_{f}\left( {s,0}\right) = f\left( {\left( s\right) }_{0}\right) \]

\[ {g}_{f}\left( {s, k + 1}\right) = {g}_{f}\left( {s, k}\right) \frown f\left( {\left( s\right) }_{k + 1}\right) \]

For instance, if \( s \) is a sequence of trees, then \( h\left( s\right) = {g}_{\text{ISubtrees}}\left( {s,\operatorname{len}\left( s\right) }\right) \) gives the sequence of the immediate subtrees of the elements of \( s \). We can use it to define hSubtreeSeq by

\[ \operatorname{hSubtreeSeq}\left( {t,0}\right) = \langle t\rangle \]

\[ \operatorname{hSubtreeSeq}\left( {t, n + 1}\right) = \operatorname{hSubtreeSeq}\left( {t, n}\right) \frown h\left( {\operatorname{hSubtreeSeq}\left( {t, n}\right) }\right) . \]

The maximum level of subtrees in a tree coded by \( t \), i.e., the maximum distance between the root and a leaf node, is bounded by the code \( t \). So a sequence of codes of all subtrees of the tree coded by \( t \) is given by \( \operatorname{hSubtreeSeq}\left( {t, t}\right) \). ▱
|
Yes
|
Theorem 27.28 (Kleene's Normal Form Theorem). There is a primitive recursive relation \( T\left( {e, x, s}\right) \) and a primitive recursive function \( U\left( s\right) \), with the following property: if \( f \) is any partial recursive function, then for some \( e \) , \[ f\left( x\right) \simeq U\left( {{\mu sT}\left( {e, x, s}\right) }\right) \] for every \( x \) .
|
The proof of the normal form theorem is involved, but the basic idea is simple. Every partial recursive function has an index \( e \), intuitively, a number coding its program or definition. If \( f\left( x\right) \downarrow \), the computation can be recorded systematically and coded by some number \( s \), and that \( s \) codes the computation of \( f \) on input \( x \) can be checked primitive recursively using only \( x \) and the definition \( e \) . This means that \( T \) is primitive recursive. Given the full record of the computation \( s \), the output of that computation can be extracted from \( s \), again primitive recursively; this extraction is the function \( U \) . So \( f\left( x\right) \) can be computed by searching for the least \( s \) with \( T\left( {e, x, s}\right) \) and applying \( U \) to it.
|
No
|
Theorem 27.29. The halting function \( h \) is not partial recursive.
|
Proof. If \( h \) were partial recursive, we could define\n\n\[ d\left( y\right) = \left\{ \begin{array}{ll} 1 & \text{ if }h\left( {y, y}\right) = 0 \\ {\mu x}\,x \neq x & \text{ otherwise. } \end{array}\right. \]\n\nFrom this definition it follows that\n\n1. \( d\left( y\right) \downarrow \) iff \( {\varphi }_{y}\left( y\right) \uparrow \) or \( y \) is not the index of a partial recursive function.\n\n2. \( d\left( y\right) \uparrow \) iff \( {\varphi }_{y}\left( y\right) \downarrow \) .\n\nIf \( h \) were partial recursive, then \( d \) would be partial recursive as well. Thus, by the Kleene normal form theorem, it has an index \( {e}_{d} \) . Consider the value of \( h\left( {{e}_{d},{e}_{d}}\right) \) . There are two possible cases: 0 and 1.\n\n1. If \( h\left( {{e}_{d},{e}_{d}}\right) = 1 \) then \( {\varphi }_{{e}_{d}}\left( {e}_{d}\right) \downarrow \) . But \( {\varphi }_{{e}_{d}} \simeq d \), and \( d\left( {e}_{d}\right) \) is defined iff \( h\left( {{e}_{d},{e}_{d}}\right) = 0 \) . So \( h\left( {{e}_{d},{e}_{d}}\right) \neq 1 \).\n\n2. If \( h\left( {{e}_{d},{e}_{d}}\right) = 0 \) then either \( {e}_{d} \) is not the index of a partial recursive function, or it is and \( {\varphi }_{{e}_{d}}\left( {e}_{d}\right) \uparrow \) . But again, \( {\varphi }_{{e}_{d}} \simeq d \), and \( d\left( {e}_{d}\right) \) is undefined iff \( {\varphi }_{{e}_{d}}\left( {e}_{d}\right) \downarrow \) .\n\nThe upshot is that \( {e}_{d} \) cannot, after all, be the index of a partial recursive function. But if \( h \) were partial recursive, \( d \) would be too, and so our definition of \( {e}_{d} \) as an index of it would be admissible. We must conclude that \( h \) cannot be partial recursive.
|
Yes
|
Theorem 28.1 (Kleene's Normal Form Theorem). There are a primitive recursive relation \( T\left( {k, x, s}\right) \) and a primitive recursive function \( U\left( s\right) \), with the following property: if \( f \) is any partial computable function, then for some \( k \) ,\n\n\[ f\left( x\right) \simeq U\left( {{\mu sT}\left( {k, x, s}\right) }\right) \]\n\nfor every \( x \) .
|
Proof Sketch. For any model of computation one can rigorously define a description of the computable function \( f \) and code such a description using a natural number \( k \) . One can also rigorously define a notion of a computation record: a number \( s \) coding the complete step-by-step computation of the function described by \( k \) on input \( x \) . The relation \( T\left( {k, x, s}\right) \), which checks that \( s \) is such a record, and the function \( U\left( s\right) \), which extracts the output from it, can then be shown to be primitive recursive.
|
No
|
Theorem 28.2. Every partial computable function has infinitely many indices.
|
Again, this is intuitively clear. Given any (description of) a computable function, one can come up with a different description which computes the same function (has the same input-output behavior) but does so, e.g., by first doing something that has no effect on the computation (say, testing whether \( 0 = 0 \), or counting to 5, etc.). The index of the altered description will always be different from the original index. Both are indices of the same function, just computed slightly differently.
|
No
|
Theorem 28.3. For each pair of natural numbers \( n \) and \( m \), there is a primitive recursive function \( {s}_{n}^{m} \) such that for every sequence \( x,{a}_{0},\ldots ,{a}_{m - 1},{y}_{0},\ldots ,{y}_{n - 1} \), we have\n\n\[{\varphi }_{{s}_{n}^{m}\left( {x,{a}_{0},\ldots ,{a}_{m - 1}}\right) }^{n}\left( {{y}_{0},\ldots ,{y}_{n - 1}}\right) \simeq {\varphi }_{x}^{m + n}\left( {{a}_{0},\ldots ,{a}_{m - 1},{y}_{0},\ldots ,{y}_{n - 1}}\right) .\]
|
It is helpful to think of \( {s}_{n}^{m} \) as acting on programs. That is, \( {s}_{n}^{m} \) takes a program, \( x \), for an \( \left( {m + n}\right) \) -ary function, as well as fixed inputs \( {a}_{0},\ldots ,{a}_{m - 1} \) ; and it returns a program, \( {s}_{n}^{m}\left( {x,{a}_{0},\ldots ,{a}_{m - 1}}\right) \), for the \( n \) -ary function of the remaining arguments. If you think of \( x \) as the description of a Turing machine, then \( {s}_{n}^{m}\left( {x,{a}_{0},\ldots ,{a}_{m - 1}}\right) \) is the Turing machine that, on input \( {y}_{0},\ldots ,{y}_{n - 1} \), prepends \( {a}_{0},\ldots ,{a}_{m - 1} \) to the input string, and runs \( x \) . Each \( {s}_{n}^{m} \) is then just a primitive recursive function that finds a code for the appropriate Turing machine.
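In a programming language, \( {s}_{n}^{m} \) is essentially partial application. A minimal Python sketch, with closures standing in for numerical indices (the real theorem returns a *code* for the specialized program, computed primitive recursively):

```python
def s_m_n(program, *fixed):
    """Given a 'program' for an (m+n)-ary function and m fixed arguments,
    return a 'program' for the n-ary function of the remaining arguments."""
    def specialized(*rest):
        # prepend the fixed arguments, then run the original program
        return program(*fixed, *rest)
    return specialized

def add3(a0, a1, y0):          # an (m+n)-ary function with m = 2, n = 1
    return a0 + a1 + y0

add_5_7 = s_m_n(add3, 5, 7)    # fix a0 = 5 and a1 = 7
```

Here `add_5_7(y)` computes `add3(5, 7, y)`, just as \( {\varphi }_{{s}_{n}^{m}\left( {x,{a}_{0},\ldots }\right) } \) computes \( {\varphi }_{x} \) with its first arguments fixed.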
|
Yes
|
Theorem 28.4. There is a universal partial computable function \( \operatorname{Un}\left( {k, x}\right) \) . In other words, there is a function \( \operatorname{Un}\left( {k, x}\right) \) such that:\n\n1. \( \operatorname{Un}\left( {k, x}\right) \) is partial computable.\n\n2. If \( f\left( x\right) \) is any partial computable function, then there is a natural number \( k \) such that \( f\left( x\right) \simeq \operatorname{Un}\left( {k, x}\right) \) for every \( x \) .
|
Proof. Let \( \operatorname{Un}\left( {k, x}\right) \simeq U\left( {{\mu sT}\left( {k, x, s}\right) }\right) \) in Kleene’s normal form theorem.
|
Yes
|
Theorem 28.5. There is no universal computable function. In other words, the universal function \( {\operatorname{Un}}^{\prime }\left( {k, x}\right) = {\varphi }_{k}\left( x\right) \) is not computable.
|
Proof. This theorem says that there is no total computable function that is universal for the total computable functions. The proof is a simple diagonalization: if \( {\operatorname{Un}}^{\prime }\left( {k, x}\right) \) were total and computable, then\n\n\[ d\left( x\right) = {\operatorname{Un}}^{\prime }\left( {x, x}\right) + 1 \]\n\nwould also be total and computable. However, for every \( k, d\left( k\right) \) is not equal to \( {\operatorname{Un}}^{\prime }\left( {k, k}\right) \) .
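The diagonalization can be made concrete: take any list standing in for (a finite part of) the enumeration \( k \mapsto {\varphi }_{k} \) of total functions, and check that \( d \) disagrees with every entry on the diagonal. The names below are illustrative.

```python
# a finite stand-in for an enumeration of total computable functions
enumeration = [
    lambda x: 0,          # the constant zero function
    lambda x: x,          # the identity
    lambda x: x * x,      # squaring
]

def un_prime(k, x):
    return enumeration[k](x)

def d(x):
    # the diagonal function: guaranteed to differ from the
    # k-th function at input k
    return un_prime(x, x) + 1
```

Since `d(k) == un_prime(k, k) + 1` for every `k`, `d` cannot occur anywhere in the enumeration, which is exactly the contradiction in the proof.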
|
Yes
|
Theorem 28.6. Let\n\n\[ h\left( {k, x}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }\operatorname{Un}\left( {k, x}\right) \text{ is defined } \\ 0 & \text{ otherwise. } \end{array}\right. \]\n\nThen \( h \) is not computable.
|
Proof. If \( h \) were computable, we would have a universal computable function, as follows. Suppose \( h \) is computable, and define\n\n\[ {\operatorname{Un}}^{\prime }\left( {k, x}\right) = \left\{ \begin{array}{ll} \operatorname{Un}\left( {k, x}\right) & \text{ if }h\left( {k, x}\right) = 1 \\ 0 & \text{ otherwise. } \end{array}\right. \]\n\nBut now \( {\operatorname{Un}}^{\prime }\left( {k, x}\right) \) is a total function, and is computable if \( h \) is. For instance, we could define \( g \) using primitive recursion, by\n\n\[ g\left( {0, k, x}\right) \simeq 0 \]\n\n\[ g\left( {y + 1, k, x}\right) \simeq \operatorname{Un}\left( {k, x}\right) \]\n\nand then\n\n\[ {\operatorname{Un}}^{\prime }\left( {k, x}\right) \simeq g\left( {h\left( {k, x}\right), k, x}\right) . \]\n\nAnd since \( {\operatorname{Un}}^{\prime }\left( {k, x}\right) \) agrees with \( \operatorname{Un}\left( {k, x}\right) \) wherever the latter is defined, \( {\operatorname{Un}}^{\prime } \) is universal for those partial computable functions that happen to be total. But this contradicts Theorem 28.5.
|
Yes
|
Theorem 28.9. Let \( S \) be a set of natural numbers. Then the following are equivalent:\n\n1. \( S \) is computably enumerable.\n\n2. \( S \) is the range of a partial computable function.\n\n3. \( S \) is empty or the range of a primitive recursive function.\n\n4. \( S \) is the domain of a partial computable function.
|
Proof. Since every primitive recursive function is computable and every computable function is partial computable, (3) implies (1) and (1) implies (2). (Note that if \( S \) is empty, \( S \) is the range of the partial computable function that is nowhere defined.) If we show that (2) implies (3), we will have shown the first three clauses equivalent.\n\nSo, suppose \( S \) is the range of the partial computable function \( {\varphi }_{e} \) . If \( S \) is empty, we are done. Otherwise, let \( a \) be any element of \( S \) . By Kleene’s normal form theorem, we can write\n\n\[{\varphi }_{e}\left( x\right) = U\left( {{\mu sT}\left( {e, x, s}\right) }\right) .\]\n\nIn particular, \( {\varphi }_{e}\left( x\right) \downarrow \) and \( = y \) if and only if there is an \( s \) such that \( T\left( {e, x, s}\right) \) and \( U\left( s\right) = y \) . Define \( f\left( z\right) \) by\n\n\[f\left( z\right) = \left\{ \begin{array}{ll} U\left( {\left( z\right) }_{1}\right) & \text{ if }T\left( {e,{\left( z\right) }_{0},{\left( z\right) }_{1}}\right) \\ a & \text{ otherwise. } \end{array}\right.\]\n\nThen \( f \) is primitive recursive, because \( T \) and \( U \) are. Expressed in terms of Turing machines, if \( z \) codes a pair \( \left\langle {{\left( z\right) }_{0},{\left( z\right) }_{1}}\right\rangle \) such that \( {\left( z\right) }_{1} \) is a halting computation of machine \( e \) on input \( {\left( z\right) }_{0} \), then \( f \) returns the output of the computation; otherwise, it returns \( a \) .We need to show that \( S \) is the range of \( f \), i.e., for any natural number \( y, y \in S \) if and only if it is in the range of \( f \) . In the forwards direction, suppose \( y \in S \) . Then \( y \) is in the range of \( {\varphi }_{e} \), so for some \( x \) and \( s \) , \( T\left( {e, x, s}\right) \) and \( U\left( s\right) = y \) ; but then \( y = f\left( {\langle x, s\rangle }\right) \) . Conversely, suppose \( y \) is in the range of \( f \) . 
Then either \( y = a \), or for some \( z, T\left( {e,{\left( z\right) }_{0},{\left( z\right) }_{1}}\right) \) and \( U\left( {\left( z\right) }_{1}\right) = y \) . Since, in the latter case, \( {\varphi }_{e}\left( {\left( z\right) }_{0}\right) \downarrow = y \), either way, \( y \) is in \( S \) .\n\n(The notation \( {\varphi }_{e}\left( x\right) \downarrow = y \) means that \( {\varphi }_{e}\left( x\right) \) is defined and its value is \( y \) .)
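The definition of \( f \) can be sketched with a toy stand-in for \( T \) and \( U \): a "machine" that halts on even inputs \( x \) after \( x \) steps with output \( 2x \), and diverges on odd inputs. Tuples play the role of coded pairs, and the toy \( U \) depends only on \( x \) (in the real proof it extracts the output from the computation record \( {\left( z\right) }_{1} \)); everything here is illustrative.

```python
def T(x, s):
    # toy halting predicate: the machine halts on input x within
    # s steps exactly when x is even and s >= x
    return x % 2 == 0 and s >= x

def U(x):
    # toy output extraction: the machine outputs 2x
    return 2 * x

A = 0          # a fixed element a of S (the machine outputs 0 on input 0)

def f(z):
    """Total function whose range is S = {2x : x even}."""
    x, s = z
    return U(x) if T(x, s) else A
```

Ranging over all pairs \( \left( {x, s}\right) \), `f` hits exactly the outputs of the toy machine, plus the default element `A`, which is already in \( S \).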
|
Yes
|
Theorem 28.10. A set \( S \) is computably enumerable if and only if there is a computable relation \( R\left( {x, y}\right) \) such that\n\n\[ S = \{ x : \exists {yR}\left( {x, y}\right) \} . \]
|
Proof. In the forward direction, suppose \( S \) is computably enumerable. Then for some \( e, S = {W}_{e} \) . For this value of \( e \) we can write \( S \) as\n\n\[ S = \{ x : \exists {yT}\left( {e, x, y}\right) \} . \]\n\nIn the reverse direction, suppose \( S = \{ x : \exists {yR}\left( {x, y}\right) \} \) . Define \( f \) by\n\n\[ f\left( x\right) \simeq {\mu y}\,R\left( {x, y}\right) \text{.} \]\n\nThen \( f \) is partial computable, and \( S \) is the domain of \( f \) .
|
Yes
|
Theorem 28.11. Suppose \( A \) and \( B \) are computably enumerable. Then so are \( A \cap B \) and \( A \cup B \) .
|
Proof. Theorem 28.9 allows us to use various characterizations of the computably enumerable sets. By way of illustration, we will provide a few different proofs.\n\nFor the first proof, suppose \( A \) is enumerated by a computable function \( f \) , and \( B \) is enumerated by a computable function \( g \) . Let\n\n\[ h\left( x\right) = {\mu y}\left( {f\left( y\right) = x \vee g\left( y\right) = x}\right) \text{and} \]\n\n\[ j\left( x\right) = {\mu y}\left( {f\left( {\left( y\right) }_{0}\right) = x \land g\left( {\left( y\right) }_{1}\right) = x}\right) . \]\n\nThen \( A \cup B \) is the domain of \( h \), and \( A \cap B \) is the domain of \( j \) .\n\nHere is what is going on, in computational terms: given procedures that enumerate \( A \) and \( B \), we can semi-decide whether an element \( x \) is in \( A \cup B \) by looking for \( x \) in either enumeration; and we can semi-decide whether an element \( x \) is in \( A \cap B \) by looking for \( x \) in both enumerations at the same time.
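In Python, with total functions `f` and `g` enumerating \( A \) and \( B \), the two searches look like this. The search is cut off at a bound so the sketch terminates; the genuine semi-decision procedures search unboundedly and never return a negative answer.

```python
def in_union(x, f, g, bound=100):
    # search both enumerations for x (the role of h above)
    for y in range(bound):
        if f(y) == x or g(y) == x:
            return True
    return None    # "not found yet" -- not a definitive no

def in_intersection(x, f, g, bound=100):
    # search for a pair (y0, y1) with f(y0) = x = g(y1),
    # mirroring the pairing (y)_0, (y)_1 in the definition of j
    for y0 in range(bound):
        for y1 in range(bound):
            if f(y0) == x and g(y1) == x:
                return True
    return None

f = lambda y: 2 * y        # enumerates the even numbers
g = lambda y: 3 * y        # enumerates the multiples of 3
```

With these sample enumerations, \( A \cap B \) is the set of multiples of 6.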
|
Yes
|
Theorem 28.12. Let \( A \) be any set of natural numbers. Then \( A \) is computable if and only if both \( A \) and \( \bar{A} \) are computably enumerable.
|
Proof. The forwards direction is easy: if \( A \) is computable, then \( \bar{A} \) is computable as well \( \left( {{\chi }_{A} = 1 - {\chi }_{\bar{A}}}\right) \), and so both are computably enumerable.\n\nIn the other direction, suppose \( A \) and \( \bar{A} \) are both computably enumerable. Let \( A \) be the domain of \( {\varphi }_{d} \), and let \( \bar{A} \) be the domain of \( {\varphi }_{e} \) . Define \( h \) by\n\n\[ h\left( x\right) = {\mu s}\left( {T\left( {d, x, s}\right) \vee T\left( {e, x, s}\right) }\right) . \]\n\nIn other words, on input \( x, h \) searches for either a halting computation of \( {\varphi }_{d} \) or a halting computation of \( {\varphi }_{e} \) on \( x \) . Now, if \( x \in A \), it will succeed in the first case, and if \( x \in \bar{A} \), it will succeed in the second case. So, \( h \) is a total computable function. But now we have that for every \( x, x \in A \) if and only if \( T\left( {d, x, h\left( x\right) }\right) \) , i.e., if \( {\varphi }_{d} \) is the one that is defined. Since \( T\left( {d, x, h\left( x\right) }\right) \) is a computable relation, \( A \) is computable.\n\nIt is easier to understand what is going on in informal computational terms: to decide \( A \), on input \( x \) search for halting computations of \( {\varphi }_{d} \) and \( {\varphi }_{e} \) . One of them is bound to halt; if it is \( {\varphi }_{d} \), then \( x \) is in \( A \), and otherwise, \( x \) is in \( \bar{A} \).
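The decision procedure can be sketched with toy step-bounded halting predicates for \( {\varphi }_{d} \) and \( {\varphi }_{e} \); here \( A \) is the set of even numbers, and all names are illustrative.

```python
def halts_d(x, s):
    # toy T(d, x, s): phi_d halts on x within s steps iff x is even
    return x % 2 == 0 and s >= x

def halts_e(x, s):
    # toy T(e, x, s): phi_e halts on x within s steps iff x is odd
    return x % 2 == 1 and s >= x

def decide_A(x):
    """Decide x in A by searching for an s at which one of the two
    semi-decision procedures halts; one of them always does, so the
    search terminates on every input."""
    s = 0
    while True:
        if halts_d(x, s):
            return True     # x is in A
        if halts_e(x, s):
            return False    # x is in the complement of A
        s += 1
```

The `while` loop is the unbounded search \( {\mu s} \) in the definition of \( h \); totality of `decide_A` rests on every \( x \) being in \( A \) or \( \bar{A} \).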
|
Yes
|
Corollary 28.13. \( \overline{{K}_{0}} \) is not computably enumerable.
|
Proof. We know that \( {K}_{0} \) is computably enumerable, but not computable. If \( \overline{{K}_{0}} \) were computably enumerable, then \( {K}_{0} \) would be computable by Theorem 28.12.
|
Yes
|
Proposition 28.15. If \( A{ \leq }_{m}B \) and \( B{ \leq }_{m}C \), then \( A{ \leq }_{m}C \) .
|
Proof. Composing a reduction of \( A \) to \( B \) with a reduction of \( B \) to \( C \) yields a reduction of \( A \) to \( C \) . (You should check the details!)
|
No
|
Proposition 28.16. Let \( A \) and \( B \) be any sets, and suppose \( A \) is many-one reducible to \( B \). 1. If \( B \) is computably enumerable, so is \( A \). 2. If \( B \) is computable, so is \( A \).
|
Proof. Let \( f \) be a many-one reduction from \( A \) to \( B \) . For the first claim, just check that if \( B \) is the domain of a partial function \( g \), then \( A \) is the domain of \( g \circ f \) : \[ x \in A\text{iff}f\left( x\right) \in B \] \[ \text{iff}g\left( {f\left( x\right) }\right) \downarrow \text{.} \] For the second claim, remember that if \( B \) is computable then \( B \) and \( \bar{B} \) are computably enumerable. It is not hard to check that \( f \) is also a many-one reduction of \( \bar{A} \) to \( \bar{B} \), so, by the first part of this proof, \( A \) and \( \bar{A} \) are computably enumerable. So \( A \) is computable as well. (Alternatively, you can check that \( \left. {{\chi }_{A} = {\chi }_{B} \circ f\text{; so if}{\chi }_{B}\text{is computable, then so is}{\chi }_{A}\text{.}}\right) \)
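The alternative argument \( {\chi }_{A} = {\chi }_{B} \circ f \) in miniature, with \( A \) the even numbers, \( B \) the multiples of 3, and \( f \) a many-one reduction of \( A \) to \( B \) (all three choices are illustrative):

```python
def chi_B(y):
    return 1 if y % 3 == 0 else 0      # B = multiples of 3, computable

def f(x):
    # x in A (i.e., x even) iff f(x) in B: even x maps to a
    # multiple of 3, odd x maps to 1, which is not in B
    return 3 * x if x % 2 == 0 else 1

def chi_A(x):
    return chi_B(f(x))                  # chi_A = chi_B ∘ f
```

Since `chi_B` and `f` are computable, so is their composition, which is the second claim of the proposition.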
|
Yes
|
Theorem 28.18. \( K,{K}_{0} \), and \( {K}_{1} \) are all complete computably enumerable sets.
|
Proof. To see that \( {K}_{0} \) is complete, let \( B \) be any computably enumerable set. Then for some index \( e \) ,\n\n\[ B = {W}_{e} = \left\{ {x : {\varphi }_{e}\left( x\right) \downarrow }\right\} \]\n\nLet \( f \) be the function \( f\left( x\right) = \langle e, x\rangle \) . Then for every natural number \( x, x \in B \) if and only if \( f\left( x\right) \in {K}_{0} \) . In other words, \( f \) reduces \( B \) to \( {K}_{0} \) .\n\nTo see that \( {K}_{1} \) is complete, note that in the proof of Proposition 28.19 we reduced \( {K}_{0} \) to it. So, by Proposition 28.15, any computably enumerable set can be reduced to \( {K}_{1} \) as well.\n\n\( K \) can be reduced to \( {K}_{0} \) in much the same way.
|
Yes
|
Proposition 28.19. Let\n\n\[ \n{K}_{1} = \left\{ {e : {\varphi }_{e}\left( 0\right) \downarrow }\right\} \n\]\n\nThen \( {K}_{1} \) is computably enumerable but not computable.
|
Proof. Since \( {K}_{1} = \{ e : \exists {sT}\left( {e,0, s}\right) \} ,{K}_{1} \) is computably enumerable by Theorem 28.10.\n\nTo show that \( {K}_{1} \) is not computable, let us show that \( {K}_{0} \) is reducible to it.\n\nThis is a little bit tricky, since using \( {K}_{1} \) we can only ask questions about computations that start with a particular input, 0 . Suppose you have a smart friend who can answer questions of this type (friends like this are known as "oracles").
|
No
|
Proposition 28.20. Tot is not computable.
|
Proof. To see that Tot is not computable, it suffices to show that \( K \) is reducible to it. Let \( h\left( {x, y}\right) \) be defined by\n\n\[ h\left( {x, y}\right) \simeq \left\{ \begin{array}{ll} 0 & \text{ if }x \in K \\ \text{ undefined } & \text{ otherwise } \end{array}\right. \]\n\nNote that \( h\left( {x, y}\right) \) does not depend on \( y \) at all. It should not be hard to see that \( h \) is partial computable: on input \( x, y \), we compute \( h \) by first simulating the function \( {\varphi }_{x} \) on input \( x \) ; if this computation halts, \( h\left( {x, y}\right) \) outputs 0 and halts. So \( h\left( {x, y}\right) \) is just \( Z\left( {{\mu sT}\left( {x, x, s}\right) }\right) \), where \( Z \) is the constant zero function.\n\nUsing the \( s \) - \( m \) - \( n \) theorem, there is a primitive recursive function \( k\left( x\right) \) such that for every \( x \) and \( y \) ,\n\n\[ {\varphi }_{k\left( x\right) }\left( y\right) = \left\{ \begin{array}{ll} 0 & \text{ if }x \in K \\ \text{ undefined } & \text{ otherwise } \end{array}\right. \]\n\nSo \( {\varphi }_{k\left( x\right) } \) is total if \( x \in K \), and undefined otherwise. Thus, \( k \) is a reduction of \( K \) to Tot.
|
Yes
|
Theorem 28.21 (Rice’s Theorem). Let \( C \) be any set of partial computable functions, and let \( A = \left\{ {n : {\varphi }_{n} \in C}\right\} \) . If \( A \) is computable, then either \( C \) is \( \varnothing \) or \( C \) is the set of all the partial computable functions.
|
Proof of Rice’s theorem. Suppose \( C \) is neither \( \varnothing \) nor the set of all the partial computable functions, and let \( A \) be the set of indices of functions in \( C \) . We will show that if \( A \) were computable, we could solve the halting problem; so \( A \) is not computable.\n\nWithout loss of generality, we can assume that the function \( f \) which is nowhere defined is not in \( C \) (otherwise, switch \( C \) and its complement in the argument below). Let \( g \) be any function in \( C \) . The idea is that if we could decide \( A \), we could tell the difference between indices computing \( f \), and indices computing \( g \) ; and then we could use that capability to solve the halting problem.\n\nHere's how. Using the universal computation predicate, we can define a function\n\n\[ h\left( {x, y}\right) \simeq \left\{ \begin{array}{ll} \text{ undefined } & \text{ if }{\varphi }_{x}\left( x\right) \uparrow \\ g\left( y\right) & \text{ otherwise. } \end{array}\right. \]\n\nTo compute \( h \), first we try to compute \( {\varphi }_{x}\left( x\right) \) ; if that computation halts, we go on to compute \( g\left( y\right) \) ; and if that computation halts, we return the output. More formally, we can write\n\n\[ h\left( {x, y}\right) \simeq {P}_{0}^{2}\left( {g\left( y\right) ,\operatorname{Un}\left( {x, x}\right) }\right) . 
\]\n\nwhere \( {P}_{0}^{2}\left( {{z}_{0},{z}_{1}}\right) = {z}_{0} \) is the 2-place projection function returning the 0-th argument, which is computable.\n\nThen \( h \) is a composition of partial computable functions, and the right side is defined and equal to \( g\left( y\right) \) just when \( \operatorname{Un}\left( {x, x}\right) \) and \( g\left( y\right) \) are both defined.\n\nNotice that for a fixed \( x \), if \( {\varphi }_{x}\left( x\right) \) is undefined, then \( h\left( {x, y}\right) \) is undefined for every \( y \) ; and if \( {\varphi }_{x}\left( x\right) \) is defined, then \( h\left( {x, y}\right) \simeq g\left( y\right) \) . So, for any fixed value of \( x \), either \( h\left( {x, y}\right) \) acts just like \( f \) or it acts just like \( g \), and deciding whether or not \( {\varphi }_{x}\left( x\right) \) is defined amounts to deciding which of these two cases holds. But this amounts to deciding whether or not \( {h}_{x}\left( y\right) \simeq h\left( {x, y}\right) \) is in \( C \) or not, and if \( A \) were computable, we could do just that.
|
Yes
|
is there a computable function \( h \), with the following property? For every \( x \) and \( y \) ,\n\n\[ h\left( {\ulcorner {\varphi }_{x}\left( y\right) \urcorner }\right) = \left\{ \begin{array}{ll} 1 & \text{ if }{\varphi }_{x}\left( y\right) \downarrow \\ 0 & \text{ otherwise. } \end{array}\right. \]
|
No; otherwise, the partial function\n\n\[ g\left( x\right) \simeq \left\{ \begin{array}{ll} 0 & \text{ if }h\left( {\ulcorner {\varphi }_{x}\left( x\right) \urcorner }\right) = 0 \\ \text{ undefined } & \text{ otherwise } \end{array}\right. \]\n\nwould be computable, and so have some index \( e \) . But then we have\n\n\[ {\varphi }_{e}\left( e\right) \simeq \left\{ \begin{array}{ll} 0 & \text{ if }h\left( {\ulcorner {\varphi }_{e}\left( e\right) \urcorner }\right) = 0 \\ \text{ undefined } & \text{ otherwise,} \end{array}\right. \]\n\nin which case \( {\varphi }_{e}\left( e\right) \) is defined if and only if it isn’t, a contradiction.
|
Yes
|
Lemma 28.23. The following statements are equivalent:\n\n1. For every partial computable function \( g\left( {x, y}\right) \), there is an index \( e \) such that for every \( y \), \n\n\[ \n{\varphi }_{e}\left( y\right) \simeq g\left( {e, y}\right) \n\] \n\n2. For every computable function \( f\left( x\right) \), there is an index \( e \) such that for every \( y \), \n\n\[ \n{\varphi }_{e}\left( y\right) \simeq {\varphi }_{f\left( e\right) }\left( y\right) \n\]
|
Proof. \( \left( 1\right) \Rightarrow \left( 2\right) \) : Given \( f \), define \( g \) by \( g\left( {x, y}\right) \simeq \operatorname{Un}\left( {f\left( x\right), y}\right) \) . Use (1) to get an index \( e \) such that for every \( y \), \n\n\[ \n{\varphi }_{e}\left( y\right) = \operatorname{Un}\left( {f\left( e\right), y}\right) \n\] \n\n\[ \n= {\varphi }_{f\left( e\right) }\left( y\right) \text{.} \n\] \n\n\n\( \left( 2\right) \Rightarrow \left( 1\right) \) : Given \( g \), use the \( s - m - n \) theorem to get \( f \) such that for every \( x \) and \( y,{\varphi }_{f\left( x\right) }\left( y\right) \simeq g\left( {x, y}\right) \) . Use (2) to get an index \( e \) such that \n\n\[ \n{\varphi }_{e}\left( y\right) = {\varphi }_{f\left( e\right) }\left( y\right) \n\] \n\n\[ \n= g\left( {e, y}\right) \text{.} \n\] \n\nThis concludes the proof.
|
Yes
|
The two statements in Lemma 28.23 are true. Specifically, for every partial computable function \( g\left( {x, y}\right) \), there is an index e such that for every \( y \) ,\n\n\[ \n{\varphi }_{e}\left( y\right) \simeq g\left( {e, y}\right) \n\]
|
Proof. The ingredients are already implicit in the discussion of the halting problem above. Let \( \operatorname{diag}\left( x\right) \) be a computable function which for each \( x \) returns an index for the function \( {f}_{x}\left( y\right) \simeq {\varphi }_{x}\left( {x, y}\right) \), i.e.\n\n\[ \n{\varphi }_{\operatorname{diag}\left( x\right) }\left( y\right) \simeq {\varphi }_{x}\left( {x, y}\right) \n\]\n\nThink of diag as a function that transforms a program for a 2-ary function into a program for a 1-ary function, obtained by fixing the original program as its first argument. The function diag can be defined formally as follows: first define \( s \) by\n\n\[ \ns\left( {x, y}\right) \simeq {\operatorname{Un}}^{2}\left( {x, x, y}\right) \n\]\n\nwhere \( {\mathrm{{Un}}}^{2} \) is a 3-ary function that is universal for partial computable 2-ary functions. Then, by the \( s - m - n \) theorem, we can find a primitive recursive function diag satisfying\n\n\[ \n{\varphi }_{\operatorname{diag}\left( x\right) }\left( y\right) \simeq s\left( {x, y}\right) \n\]\n\nNow, define the function \( l \) by\n\n\[ \nl\left( {x, y}\right) \simeq g\left( {\operatorname{diag}\left( x\right), y}\right) .\n\]\n\nand let \( \ulcorner l\urcorner \) be an index for \( l \) . Finally, let \( e = \operatorname{diag}\left( {\ulcorner l\urcorner }\right) \) . Then for every \( y \), we have\n\n\[ \n{\varphi }_{e}\left( y\right) \simeq {\varphi }_{\operatorname{diag}\left( {\ulcorner l\urcorner }\right) }\left( y\right) \n\]\n\n\[ \n\simeq {\varphi }_{\ulcorner l\urcorner }\left( {\ulcorner l\urcorner, y}\right) \n\]\n\n\[ \n\simeq l\left( {\ulcorner l\urcorner, y}\right) \n\]\n\n\[ \n\simeq g\left( {\operatorname{diag}\left( {\ulcorner l\urcorner }\right), y}\right) \n\]\n\n\[ \n\simeq g\left( {e, y}\right) \n\]\n\nas required.
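If we let Python closures stand in for indices, the fixed point becomes almost trivial, because a function body can refer to the name being defined; this hides exactly the coding work (diag and the \( s \) - \( m \) - \( n \) theorem) that the proof has to do explicitly. A sketch:

```python
def fixed_point(g):
    """Return a function e with e(y) = g(e, y) for every y."""
    def e(y):
        # by the time g calls e, the name e is bound to this
        # very function, so e really is its own 'index'
        return g(e, y)
    return e

# a g that uses its own fixed point to recurse: the resulting e
# computes the identity function
def g(e, y):
    return 0 if y == 0 else e(y - 1) + 1

e = fixed_point(g)
```

The interest of the theorem is that the same self-reference is achievable with bare numerical indices, where no such late binding is available.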
|
Yes
|
Theorem 28.25. There is no partial computable function \( f \) with the following property: whenever \( {W}_{e} \) is computable, then \( f\left( e\right) \) is defined and \( {\varphi }_{f\left( e\right) } \) is its characteristic function.
|
Proof. Let \( f \) be any computable function; we will construct an \( e \) such that \( {W}_{e} \) is computable, but \( {\varphi }_{f\left( e\right) } \) is not its characteristic function. Using the fixed point theorem, we can find an index \( e \) such that\n\n\[ \n{\varphi }_{e}\left( y\right) \simeq \left\{ \begin{array}{ll} 0 & \text{ if }y = 0\text{ and }{\varphi }_{f\left( e\right) }\left( 0\right) \downarrow = 0 \\ \text{ undefined } & \text{ otherwise. } \end{array}\right.\n\]\n\nThat is, \( e \) is obtained by applying the fixed-point theorem to the function defined by\n\n\[ \ng\left( {x, y}\right) \simeq \left\{ \begin{array}{ll} 0 & \text{ if }y = 0\text{ and }{\varphi }_{f\left( x\right) }\left( 0\right) \downarrow = 0 \\ \text{ undefined } & \text{ otherwise. } \end{array}\right.\n\]\nInformally, we can see that \( g \) is partial computable, as follows: on input \( x \) and \( y \), the algorithm first checks to see if \( y \) is equal to 0 . If it is, the algorithm computes \( f\left( x\right) \), and then uses the universal machine to compute \( {\varphi }_{f\left( x\right) }\left( 0\right) \) . If this last computation halts and returns 0 , the algorithm returns 0 ; otherwise, the algorithm doesn't halt.\n\nBut now notice that if \( {\varphi }_{f\left( e\right) }\left( 0\right) \) is defined and equal to 0, then \( {\varphi }_{e}\left( y\right) \) is defined exactly when \( y \) is equal to 0, so \( {W}_{e} = \{ 0\} \) . If \( {\varphi }_{f\left( e\right) }\left( 0\right) \) is not defined, or is defined but not equal to 0, then \( {W}_{e} = \varnothing \) . Either way, \( {\varphi }_{f\left( e\right) } \) is not the characteristic function of \( {W}_{e} \), since it gives the wrong answer on input 0 .
|
Yes
|
Lemma 28.26. Suppose \( f\left( {x, y}\right) \) is primitive recursive. Let \( g \) be defined by\n\n\[ g\left( x\right) \simeq {\mu yf}\left( {x, y}\right) = 0. \]\n\nThen \( g \) is represented by a lambda term.
|
Proof. The idea is roughly as follows. Given \( x \), we will use the fixed-point lambda term \( Y \) to define a function \( {h}_{x}\left( n\right) \) which searches for a \( y \) starting at \( n \) ; then \( g\left( x\right) \) is just \( {h}_{x}\left( 0\right) \) . The function \( {h}_{x} \) can be expressed as the solution of a fixed-point equation:\n\n\[ {h}_{x}\left( n\right) \simeq \left\{ \begin{array}{ll} n & \text{ if }f\left( {x, n}\right) = 0 \\ {h}_{x}\left( {n + 1}\right) & \text{ otherwise. } \end{array}\right. \]\n\nHere are the details. Since \( f \) is primitive recursive, it is represented by some term \( F \) . Remember that we also have a lambda term \( D \) such that \( D\left( {M, N,\overline{0}}\right) \rightarrow \) \( M \) and \( D\left( {M, N,\overline{1}}\right) \rightarrow N \) . Fixing \( x \) for the moment, to represent \( {h}_{x} \) we want to find a term \( H \) (depending on \( x \) ) satisfying\n\n\[ H\left( \bar{n}\right) \equiv D\left( {\bar{n}, H\left( {S\left( \bar{n}\right) }\right), F\left( {x,\bar{n}}\right) }\right) . \]\n\nWe can do this using the fixed-point term \( Y \) . First, let \( U \) be the term\n\n\[ {\lambda h}.{\lambda z}.D\left( {z,\left( {h\left( {Sz}\right) }\right), F\left( {x, z}\right) }\right) ,\]\n\nand then let \( H \) be the term \( {YU} \) . Notice that the only free variable in \( H \) is \( x \) . Let us show that \( H \) satisfies the equation above.\n\nBy the definition of \( Y \), we have\n\n\[ H = {YU} \equiv U\left( {YU}\right) = U\left( H\right) \]\n\nIn particular, for each natural number \( n \), we have\n\n\[ H\left( \bar{n}\right) \equiv U\left( {H,\bar{n}}\right) \]\n\n\[ \rightarrow D\left( {\bar{n}, H\left( {S\left( \bar{n}\right) }\right), F\left( {x,\bar{n}}\right) }\right) ,\]\n\nas required. 
Notice that if you substitute a numeral \( \bar{m} \) for \( x \) in the last line, the expression reduces to \( \bar{n} \) if \( F\left( {\bar{m},\bar{n}}\right) \) reduces to \( \overline{0} \), and it reduces to \( H\left( {S\left( \bar{n}\right) }\right) \) if \( F\left( {\bar{m},\bar{n}}\right) \) reduces to any other numeral.\n\nTo finish off the proof, let \( G \) be \( {\lambda x}.H\left( \overline{0}\right) \) . Then \( G \) represents \( g \) ; in other words, for every \( m, G\left( \bar{m}\right) \) reduces to \( \overline{g\left( m\right) } \) if \( g\left( m\right) \) is defined, and has no normal form otherwise.
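The construction translates to Python using the strict fixed-point combinator \( Z \) (the call-by-value analogue of \( Y \), which would loop forever under Python's eager evaluation); the conditional plays the role of \( D \), and the inner lambda the role of \( U \). A sketch:

```python
# the strict fixed-point combinator Z
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

def minimize(f, x):
    """Compute g(x) = the least y with f(x, y) = 0, as a fixed point."""
    # U = λh.λz. D(z, h(Sz), F(x, z)): return n if f(x, n) = 0,
    # otherwise continue the search at n + 1
    U = lambda h: lambda n: n if f(x, n) == 0 else h(n + 1)
    H = Z(U)        # H satisfies H(n) = U(H)(n)
    return H(0)     # g(x) = H(0)
```

For example, with \( f\left( {x, y}\right) = 0 \) iff \( {y}^{2} \geq x \), the search computes \( \lceil \sqrt{x}\,\rceil \); if \( f\left( {x, y}\right) \) is never 0, the search does not terminate, matching the partiality of \( g \).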
|
Yes
|
Even Machine: The following Turing machine halts if, and only if, there are an even number of 1's on the tape (under the assumption that all 1's come before the first 0 on the tape).
|
The state diagram corresponds to the following transition function:\n\n\[ \delta \left( {{q}_{0},1}\right) = \left\langle {{q}_{1},1, R}\right\rangle \]\n\n\[ \delta \left( {{q}_{1},1}\right) = \left\langle {{q}_{0},1, R}\right\rangle \]\n\n\[ \delta \left( {{q}_{1},0}\right) = \left\langle {{q}_{1},0, R}\right\rangle \]\n\nThe above machine halts only when the input is an even number of strokes. Otherwise, the machine (theoretically) continues to operate indefinitely. For any machine and input, it is possible to trace through the configurations of the machine in order to determine the output. We will give a formal definition of configurations later. For now, we can intuitively think of configurations as a series of diagrams showing the state of the machine at any point in time during operation. Configurations show the content of the tape, the state of the machine and the location of the read/write head.\n\nLet us trace through the configurations of the even machine if it is started with an input of four 1's. In this case, we expect that the machine will halt. We will then run the machine on an input of three 1's, where the machine will run forever.\n\nThe machine starts in state \( {q}_{0} \), scanning the leftmost 1 . We can represent the initial state of the machine as follows:\n\n\[ \vartriangleright {1}_{0}{1110}\ldots \]\n\nThe above configuration is straightforward. As can be seen, the machine starts in state \( {q}_{0} \), scanning the leftmost 1 . This is represented by a subscript of the state name on the first 1 . The applicable instruction at this point is \( \delta \left( {{q}_{0},1}\right) = \) \( \left\langle {{q}_{1},1, R}\right\rangle \), and so the machine moves right on the tape and changes to state \( {q}_{1} \).\n\n\[ \vartriangleright {11}_{1}{110}\ldots \]\n\nSince the machine is now in state \( {q}_{1} \) scanning a 1, we have to carry out the instruction \( \delta \left( {{q}_{1},1}\right) = \left\langle {{q}_{0},1, R}\right\rangle \) : leave the 1 in place, move right, and return to state \( {q}_{0} \).
|
Yes
|
The machine table for the even machine is:
|
<table><thead><tr><th></th><th>0</th><th>1</th></tr></thead><tr><td>\( {q}_{0} \)</td><td></td><td>\( 1,{q}_{1}, R \)</td></tr><tr><td>\( {q}_{1} \)</td><td>\( 0,{q}_{1}, R \)</td><td>\( 1,{q}_{0}, R \)</td></tr></table>

As we can see, the machine halts when scanning a blank in state \( {q}_{0} \).
|
No
|
Before building a doubler machine, it is important to come up with a strategy for solving the problem. Since the machine (as we have formulated it) cannot remember how many 1's it has read, we need to come up with a way to keep track of all the 1's on the tape. One such way is to separate the output from the input with a 0. The machine can then erase the first 1 from the input, traverse over the rest of the input, leave a 0, and write two new 1's. The machine will then go back and find the second 1 in the input, and double that one as well. For each 1 of input, it will write two 1's of output. By erasing the input as the machine goes, we can guarantee that no 1 is missed or doubled twice. When the entire input is erased, there will be \( 2n \) 1's left on the tape.
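The strategy can be tried out in a small simulation. The transition table below is a plausible reconstruction of such a doubler, not necessarily the machine of Figure 29.2: \( q_0 \) erases the leftmost input stroke, \( q_1 \) and \( q_2 \) move right across the remaining input and the output so far, \( q_3 \) writes the second new stroke, and \( q_4 \) and \( q_5 \) return to the start.

```python
# A sketch of a doubler machine (an assumed reconstruction, not necessarily
# the machine in Figure 29.2). Blank is "0"; the input is a block of n 1's.
doubler = {
    ("q0", "1"): ("q1", "0", "R"),  # erase the leftmost input stroke
    ("q1", "1"): ("q1", "1", "R"),  # move right over the remaining input
    ("q1", "0"): ("q2", "0", "R"),  # cross the separating 0
    ("q2", "1"): ("q2", "1", "R"),  # move right over the output so far
    ("q2", "0"): ("q3", "1", "R"),  # write the first new stroke
    ("q3", "0"): ("q4", "1", "L"),  # write the second, turn around
    ("q4", "1"): ("q4", "1", "L"),  # move left over the output
    ("q4", "0"): ("q5", "0", "L"),  # cross the separator back
    ("q5", "1"): ("q5", "1", "L"),  # move left over the remaining input
    ("q5", "0"): ("q0", "0", "R"),  # position on the leftmost input stroke
}
# In q0 on a blank there is no instruction, so the machine halts once the
# whole input has been erased.

def run(delta, tape, state="q0", max_steps=10000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, "0")
        if (state, symbol) not in delta:
            break
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

print(run(doubler, "111").count("1"))  # n = 3 strokes in, six strokes out
```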
|
The state diagram of the resulting Turing machine is depicted in Figure 29.2.
|
No
|
Example 29.12. Addition: Build a machine that, when given an input of two non-empty strings of \( 1 \) ’s of length \( n \) and \( m \), computes the function \( f\left( {n, m}\right) = \) \( n + m \) .
|
We want to come up with a machine that starts with two blocks of strokes on the tape and halts with one block of strokes. We first need a method to carry out. The input strokes are separated by a blank, so one method would be to write a stroke on the square containing the blank, and erase the first (or last) stroke. This would result in a block of \( n + m \) 1's. Alternatively, we could proceed in a similar way to the doubler machine, by erasing a stroke from the first block, and adding one to the second block of strokes until the first block has been removed completely. We will proceed with the former method.
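The former method is short enough to write out in full. The table below is an illustrative reconstruction under that method (erase the first stroke, move to the separating blank, write a stroke there), not the book's own diagram.

```python
# A sketch of an addition machine using the first method described above.
# Blank is "0"; input is n 1's, a blank, then m 1's (n, m >= 1).
adder = {
    ("q0", "1"): ("q1", "0", "R"),  # erase the first stroke of the first block
    ("q1", "1"): ("q1", "1", "R"),  # move right over the rest of the block
    ("q1", "0"): ("q2", "1", "R"),  # write a stroke on the separator
}
# q2 has no instructions, so the machine halts right after writing.

def run(delta, tape, state="q0", max_steps=1000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, "0")
        if (state, symbol) not in delta:
            break
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Blocks of n = 2 and m = 3 strokes, separated by a blank.
print(run(adder, "11" + "0" + "111").count("1"))  # five 1's: 2 + 3 = 5
```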
|
No
|
Example 29.13. Halting States. To elucidate this concept, let us begin with an alteration of the even machine. Instead of having the machine halt in state \( {q}_{0} \) if the input is even, we can add an instruction to send the machine into a halt state.
|
Let us further expand the example. When the machine determines that the input is odd, it never halts. We can alter the machine to include a reject state by replacing the looping instruction with an instruction to go to a reject state \( r \) .
|
No
|
Example 29.14. Combining Machines: Design a machine that computes the function \( f\left( {m, n}\right) = 2\left( {m + n}\right) \) .
|
In order to build this machine, we can combine two machines we are already familiar with: the addition machine, and the doubler. We begin by drawing a state diagram for the addition machine.

Instead of halting at state \( {q}_{2} \), we want to continue operation in order to double the output. Recall that the doubler machine erases the first stroke in the input and writes two strokes in a separate output. Let's add an instruction to make sure the tape head is reading the first stroke of the output of the addition machine.

It is now easy to double the input: all we have to do is connect the doubler machine onto state \( {q}_{4} \). This requires renaming the states of the doubler machine so that they start at \( {q}_{4} \) instead of \( {q}_{0} \); this way we don't end up with two starting states. The final diagram should look as in Figure 29.3.
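Renaming the states by a fixed offset is entirely mechanical. A sketch (the function name and the dictionary encoding of the transition table are assumptions of this illustration):

```python
# Rename the states of a transition table by a fixed offset, so that a
# machine that starts at q0 can be appended after state q4 of another.
def shift_states(delta, offset):
    def shift(q):
        return "q" + str(int(q[1:]) + offset)
    return {
        (shift(q), sym): (shift(q2), write, move)
        for (q, sym), (q2, write, move) in delta.items()
    }

# A two-instruction fragment, shifted to start at q4 instead of q0.
fragment = {("q0", "1"): ("q1", "0", "R"), ("q1", "0"): ("q0", "0", "L")}
shifted = shift_states(fragment, 4)
print(shifted)  # instructions now run from q4 upward
```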
|
Yes
|
Theorem 30.1. There are functions from \( \mathbb{N} \) to \( \mathbb{N} \) which are not Turing computable.
|
Proof. We know that the set of finite strings of symbols from a denumerable alphabet is enumerable. This gives us that the set of descriptions of Turing machines, as a subset of the finite strings from the enumerable vocabulary \( \left\{ {{q}_{0},{q}_{1},\ldots ,\vartriangleright ,{\sigma }_{1},{\sigma }_{2},\ldots }\right\} \), is itself enumerable. Since every Turing computable function is computed by some (in fact, many) Turing machines, this means that the set of all Turing computable functions from \( \mathbb{N} \) to \( \mathbb{N} \) is also enumerable.

On the other hand, the set of all functions from \( \mathbb{N} \) to \( \mathbb{N} \) is not enumerable. This follows immediately from the fact that not even the set of all functions of one argument from \( \mathbb{N} \) to the set \( \{ 0,1\} \) is enumerable. If all functions were computable by some Turing machine we could enumerate the set of all functions. So there are some functions that are not Turing computable.
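The non-enumerability of the 0/1-valued functions rests on diagonalization: from any list of such functions one can define a function that differs from the \( n \)-th function at argument \( n \), so no list is exhaustive. A sketch against a finite initial segment of such a list:

```python
# Diagonalization: given an enumeration of 0/1-valued functions (here a
# finite list, for illustration), build a function differing from the
# n-th function at argument n -- so it cannot appear anywhere in the list.
def diagonal(fs):
    return lambda n: 1 - fs[n](n)

fs = [lambda n: 0, lambda n: n % 2, lambda n: 1]
g = diagonal(fs)
print([g(n) != fs[n](n) for n in range(len(fs))])  # [True, True, True]
```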
|
Yes
|
Lemma 30.5. The function \( s \) is not Turing computable.
|
Proof. We suppose, for contradiction, that the function \( s \) is Turing computable. Then there would be a Turing machine \( S \) that computes \( s \). We may assume, without loss of generality, that when \( S \) halts, it does so while scanning the first square. This machine can be …
|
No
|
Theorem 30.6 (Unsolvability of the Halting Problem). The halting problem is unsolvable, i.e., the function \( h \) is not Turing computable.
|
Proof. Suppose \( h \) were Turing computable, say, by a Turing machine \( H \). We could use \( H \) to build a Turing machine that computes \( s \): First, make a copy of the input (separated by a blank). Then move back to the beginning, and run \( H \). We can clearly make a machine that does the former, and if \( H \) existed, we would be able to do the latter as well. But then \( s \) would be Turing computable, which contradicts Lemma 30.5.
|
No
|
Proposition 30.8. If \( m < k \), then \( \tau \left( {M, w}\right) \vDash \bar{m} < \bar{k} \).
|
Proof. Exercise.
|
No
|
Lemma 30.10. If \( M \) run on input \( w \) is in a halting configuration after \( n \) steps, then \( \chi \left( {M, w, n}\right) \vDash \alpha \left( {M, w}\right) \) .
|
Proof. Suppose that \( M \) halts for input \( w \) after \( n \) steps. There is some state \( q \), square \( m \), and symbol \( \sigma \) such that:

1. After \( n \) steps, \( M \) is in state \( q \) scanning square \( m \) on which \( \sigma \) appears.

2. The transition function \( \delta \left( {q,\sigma }\right) \) is undefined.

\( \chi \left( {M, w, n}\right) \) is the description of this configuration and will include the clauses \( {Q}_{q}\left( {\bar{m},\bar{n}}\right) \) and \( {S}_{\sigma }\left( {\bar{m},\bar{n}}\right) \). These clauses together imply \( \alpha \left( {M, w}\right) \):

\[ \exists x\exists y\left( {\mathop{\bigvee }\limits_{{\langle q,\sigma \rangle \in X}}\left( {{Q}_{q}\left( {x, y}\right) \land {S}_{\sigma }\left( {x, y}\right) }\right) }\right) \]

since \( {Q}_{{q}^{\prime }}\left( {\bar{m},\bar{n}}\right) \land {S}_{{\sigma }^{\prime }}\left( {\bar{m},\bar{n}}\right) \vDash \mathop{\bigvee }\limits_{{\langle q,\sigma \rangle \in X}}\left( {{Q}_{q}\left( {\bar{m},\bar{n}}\right) \land {S}_{\sigma }\left( {\bar{m},\bar{n}}\right) }\right) \), as \( \left\langle {{q}^{\prime },{\sigma }^{\prime }}\right\rangle \in X \).

So if \( M \) halts for input \( w \), then there is some \( n \) such that \( \chi \left( {M, w, n}\right) \vDash \alpha \left( {M, w}\right) \).
|
Yes
|
Lemma 30.12. If \( M \) halts on input \( w \), then \( \tau \left( {M, w}\right) \rightarrow \alpha \left( {M, w}\right) \) is valid.
|
Proof. By Lemma 30.11, we know that, for any time \( n \), the description \( \chi \left( {M, w, n}\right) \) of the configuration of \( M \) at time \( n \) is entailed by \( \tau \left( {M, w}\right) \) . Suppose \( M \) halts after \( k \) steps. It will be scanning square \( m \), say. Then \( \chi \left( {M, w, k}\right) \) describes a halting configuration of \( M \), i.e., it contains as conjuncts both \( {Q}_{q}\left( {\bar{m},\bar{k}}\right) \) and \( {S}_{\sigma }\left( {\bar{m},\bar{k}}\right) \) with \( \delta \left( {q,\sigma }\right) \) undefined. Thus, by Lemma 30.10, \( \chi \left( {M, w, k}\right) \vDash \alpha \left( {M, w}\right) \) . But since \( \tau \left( {M, w}\right) \vDash \chi \left( {M, w, k}\right) \), we have \( \tau \left( {M, w}\right) \vDash \alpha \left( {M, w}\right) \) and therefore \( \tau \left( {M, w}\right) \rightarrow \alpha \left( {M, w}\right) \) is valid.
|
Yes
|
Lemma 30.13. If \( \vDash \tau \left( {M, w}\right) \rightarrow \alpha \left( {M, w}\right) \), then \( M \) halts on input \( w \) .
|
Proof. Consider the \( {\mathcal{L}}_{M} \)-structure \( \mathfrak{M} \) with domain \( \mathbb{N} \) which interprets 0 as 0, \( {}^{\prime } \) as the successor function, and \( < \) as the less-than relation, and the predicates \( {Q}_{q} \) and \( {S}_{\sigma } \) as follows:

\[ {Q}_{q}^{\mathfrak{M}} = \{ \langle m, n\rangle : \text{started on } w\text{, after } n \text{ steps, } M \text{ is in state } q \text{ scanning square } m\} \]

\[ {S}_{\sigma }^{\mathfrak{M}} = \{ \langle m, n\rangle : \text{started on } w\text{, after } n \text{ steps, square } m \text{ of } M \text{ contains symbol } \sigma \} \]

In other words, we construct the structure \( \mathfrak{M} \) so that it describes what \( M \) started on input \( w \) actually does, step by step. Clearly, \( \mathfrak{M} \vDash \tau \left( {M, w}\right) \). If \( \vDash \tau \left( {M, w}\right) \rightarrow \alpha \left( {M, w}\right) \), then also \( \mathfrak{M} \vDash \alpha \left( {M, w}\right) \), i.e.,

\[ \mathfrak{M} \vDash \exists x\exists y\left( {\mathop{\bigvee }\limits_{{\langle q,\sigma \rangle \in X}}\left( {{Q}_{q}\left( {x, y}\right) \land {S}_{\sigma }\left( {x, y}\right) }\right) }\right) . \]

As \( \left| \mathfrak{M}\right| = \mathbb{N} \), there must be \( m, n \in \mathbb{N} \) so that \( \mathfrak{M} \vDash {Q}_{q}\left( {\bar{m},\bar{n}}\right) \land {S}_{\sigma }\left( {\bar{m},\bar{n}}\right) \) for some \( q \) and \( \sigma \) such that \( \delta \left( {q,\sigma }\right) \) is undefined. By the definition of \( \mathfrak{M} \), this means that \( M \) started on input \( w \) after \( n \) steps is in state \( q \) and reading symbol \( \sigma \), and the transition function is undefined, i.e., \( M \) has halted.
|
Yes
|
Any theory axiomatized by a finite set of sentences is axiomatizable, since any finite set is decidable. Thus, \( \mathbf{Q} \), for instance, is axiomatizable.
|
Schematically axiomatized theories like PA are also axiomatizable. For to test if \( \psi \) is among the axioms of PA, i.e., to compute the function \( {\chi }_{X} \) where \( {\chi }_{X}\left( \psi \right) = 1 \) if \( \psi \) is an axiom of \( \mathbf{{PA}} \) and \( = 0 \) otherwise, we can do the following: First, check if \( \psi \) is one of the axioms of \( \mathbf{Q} \). If it is, the answer is "yes."
|
No
|
Theorem 31.14. If \( \Gamma \) is a consistent and axiomatizable theory in \( {\mathcal{L}}_{A} \) which represents all computable functions and decidable relations, then \( \Gamma \) is not complete.
|
To say that \( \Gamma \) is not complete is to say that for at least one sentence \( \varphi \) , \( \Gamma \nvdash \varphi \) and \( \Gamma \nvdash \neg \varphi \) . Such a sentence is called independent (of \( \Gamma \) ). We can in fact relatively quickly prove that there must be independent sentences. But the power of Gödel's proof of the theorem lies in the fact that it exhibits a specific example of such an independent sentence. The intriguing construction produces a sentence \( {G}_{\Gamma } \), called a Gödel sentence for \( \Gamma \), which is unprovable because in \( \Gamma ,{G}_{\Gamma } \) is equivalent to the claim that \( {G}_{\Gamma } \) is unprovable in \( \Gamma \) . It does so constructively, i.e., given an axiomatization of \( \Gamma \) and a description of the proof system, the proof gives a method for actually writing down \( {G}_{\Gamma } \) .
|
Yes
|
Theorem 31.15. If \( \Gamma \) is a consistent theory that represents every decidable relation, then \( \Gamma \) is not decidable.
|
Proof. Suppose \( \Gamma \) were decidable. We show that if \( \Gamma \) represents every decidable relation, it must be inconsistent.

Decidable properties (one-place relations) are represented by formulas with one free variable. Let \( {\varphi }_{0}\left( x\right) ,{\varphi }_{1}\left( x\right) ,\ldots \) be a computable enumeration of all such formulas. Now consider the following set \( D \subseteq \mathbb{N} \):

\[ D = \left\{ {n : \Gamma \vdash \neg {\varphi }_{n}\left( \bar{n}\right) }\right\} \]

The set \( D \) is decidable, since we can test if \( n \in D \) by first computing \( {\varphi }_{n}\left( x\right) \), and from this \( \neg {\varphi }_{n}\left( \bar{n}\right) \). Obviously, substituting the term \( \bar{n} \) for every free occurrence of \( x \) in \( {\varphi }_{n}\left( x\right) \) and prefixing \( {\varphi }_{n}\left( \bar{n}\right) \) by \( \neg \) is a mechanical matter. By assumption, \( \Gamma \) is decidable, so we can test if \( \neg {\varphi }_{n}\left( \bar{n}\right) \in \Gamma \). If it is, \( n \in D \), and if it isn't, \( n \notin D \). So \( D \) is likewise decidable.

Since \( \Gamma \) represents all decidable properties, it represents \( D \). And the formulas which represent \( D \) in \( \Gamma \) are all among \( {\varphi }_{0}\left( x\right) ,{\varphi }_{1}\left( x\right) ,\ldots \). So let \( d \) be a number such that \( {\varphi }_{d}\left( x\right) \) represents \( D \) in \( \Gamma \). If \( d \notin D \), then, since \( {\varphi }_{d}\left( x\right) \) represents \( D \), \( \Gamma \vdash \neg {\varphi }_{d}\left( \bar{d}\right) \). But that means that \( d \) meets the defining condition of \( D \), and so \( d \in D \). This contradicts \( d \notin D \). So by indirect proof, \( d \in D \).

Since \( d \in D \), by the definition of \( D \), \( \Gamma \vdash \neg {\varphi }_{d}\left( \bar{d}\right) \).
On the other hand, since \( {\varphi }_{d}\left( x\right) \) represents \( D \) in \( \Gamma ,\Gamma \vdash {\varphi }_{d}\left( \bar{d}\right) \) . Hence, \( \Gamma \) is inconsistent.
|
Yes
|
Theorem 31.16. If \( \Gamma \) is axiomatizable and complete it is decidable.
|
Proof. Any inconsistent theory is decidable, since inconsistent theories contain all sentences, so the answer to the question "is \( \varphi \in \Gamma \)?" is always yes.
|
No
|
If \( \Gamma \) is consistent, axiomatizable, and represents every decidable property, it is not complete.
|
Proof. If \( \Gamma \) were complete, it would be decidable by Theorem 31.16 (since it is axiomatizable and consistent). But since \( \Gamma \) represents every decidable property, it is not decidable, by Theorem 31.15.
|
Yes
|
Recall that if \( {k}_{0},\ldots ,{k}_{n - 1} \) is a sequence of numbers, then the code of the sequence \( \left\langle {{k}_{0},\ldots ,{k}_{n - 1}}\right\rangle \) in the power-of-primes coding is

\[ {2}^{{k}_{0} + 1} \cdot {3}^{{k}_{1} + 1}\cdots {p}_{n - 1}^{{k}_{n - 1} + 1} \]

where \( {p}_{i} \) is the \( i \)-th prime (starting with \( {p}_{0} = 2 \)).
|
So for instance, the formula \( {v}_{0} = \mathrm{o} \), or, more explicitly, \( = \left( {{v}_{0},{c}_{0}}\right) \), has the Gödel number

\[ \left\langle {{\mathrm{c}}_{ = },{\mathrm{c}}_{(},{\mathrm{c}}_{{v}_{0}},{\mathrm{c}}_{,},{\mathrm{c}}_{{c}_{0}},{\mathrm{c}}_{)}}\right\rangle . \]

Here, \( {\mathrm{c}}_{ = } \) is \( \langle 0,7\rangle = {2}^{0 + 1} \cdot {3}^{7 + 1} \), \( {\mathrm{c}}_{{v}_{0}} \) is \( \langle 1,0\rangle = {2}^{1 + 1} \cdot {3}^{0 + 1} \), etc. So \( {}^{\# } = {\left( {v}_{0},{c}_{0}\right) }^{\# } \) is

\[ {2}^{{\mathrm{c}}_{ = } + 1} \cdot {3}^{{\mathrm{c}}_{(} + 1} \cdot {5}^{{\mathrm{c}}_{{v}_{0}} + 1} \cdot {7}^{{\mathrm{c}}_{,} + 1} \cdot {11}^{{\mathrm{c}}_{{c}_{0}} + 1} \cdot {13}^{{\mathrm{c}}_{)} + 1} = \]

\[ {2}^{{2}^{1} \cdot {3}^{8} + 1} \cdot {3}^{{2}^{1} \cdot {3}^{9} + 1} \cdot {5}^{{2}^{2} \cdot {3}^{1} + 1} \cdot {7}^{{2}^{1} \cdot {3}^{11} + 1} \cdot {11}^{{2}^{3} \cdot {3}^{1} + 1} \cdot {13}^{{2}^{1} \cdot {3}^{10} + 1} = \]

\[ {2}^{13123} \cdot {3}^{39367} \cdot {5}^{13} \cdot {7}^{354295} \cdot {11}^{25} \cdot {13}^{118099}. \]
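The arithmetic above is easy to check mechanically. A sketch verifying it, with symbol codes \( \langle i, j\rangle \) taken as given in the example:

```python
# Verify the worked Gödel-number computation above. Symbol codes are pairs
# <i, j> coded as 2^(i+1) * 3^(j+1); sequences use the power-of-primes coding.
primes = [2, 3, 5, 7, 11, 13]

def pair(i, j):
    return 2 ** (i + 1) * 3 ** (j + 1)

def code(seq):
    n = 1
    for p, k in zip(primes, seq):
        n *= p ** (k + 1)
    return n

# Codes of the symbols =, (, v0, the comma, c0, ) as used in the example.
syms = [pair(0, 7), pair(0, 8), pair(1, 0), pair(0, 10), pair(2, 0), pair(0, 9)]
print([k + 1 for k in syms])  # the six exponents in the final product

g = code(syms)
print(g == 2**13123 * 3**39367 * 5**13 * 7**354295 * 11**25 * 13**118099)
```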
|
Yes
|
Proposition 32.5. The relations \( \operatorname{Term}\left( x\right) \) and \( \operatorname{ClTerm}\left( x\right) \) which hold iff \( x \) is the Gödel number of a term or a closed term, respectively, are primitive recursive.
|
Proof. A sequence of symbols \( s \) is a term iff there is a sequence \( {s}_{0},\ldots ,{s}_{k - 1} = s \) of terms which records how the term \( s \) was formed from constant symbols and variables according to the formation rules for terms. To express that such a putative formation sequence follows the formation rules it has to be the case that, for each \( i < k \), either

1. \( {s}_{i} \) is a variable \( {v}_{j} \), or

2. \( {s}_{i} \) is a constant symbol \( {c}_{j} \), or

3. \( {s}_{i} \) is built from \( n \) terms \( {t}_{1},\ldots ,{t}_{n} \) occurring prior to place \( i \) using an \( n \)-place function symbol \( {f}_{j}^{n} \).

To show that the corresponding relation on Gödel numbers is primitive recursive, we have to express this condition primitive recursively, i.e., using primitive recursive functions, relations, and bounded quantification.

Suppose \( y \) is the number that codes the sequence \( {s}_{0},\ldots ,{s}_{k - 1} \), i.e., \( y = \left\langle {{}^{\# }{s}_{0}{}^{\# },\ldots ,{}^{\# }{s}_{k - 1}{}^{\# }}\right\rangle \). It codes a formation sequence for the term with Gödel number \( x \) iff for all \( i < k \):

1. \( \operatorname{Var}\left( {\left( y\right) }_{i}\right) \), or

2. \( \operatorname{Const}\left( {\left( y\right) }_{i}\right) \), or

3. there is an \( n \) and a number \( z = \left\langle {{z}_{1},\ldots ,{z}_{n}}\right\rangle \) such that each \( {z}_{l} \) is equal to some \( {\left( y\right) }_{{i}^{\prime }} \) for \( {i}^{\prime } < i \) and

\[ {\left( y\right) }_{i} = {}^{\# }{f}_{j}^{n}{\left( {}^{\# } \frown \operatorname{flatten}\left( z\right) \frown {}^{\# }\right) }^{\# }, \]

and moreover \( {\left( y\right) }_{k - 1} = x \).
(The function \( \operatorname{flatten}\left( z\right) \) turns the sequence \( \left\langle {{}^{\# }{t}_{1}{}^{\# },\ldots ,{}^{\# }{t}_{n}{}^{\# }}\right\rangle \) into \( {}^{\# }{t}_{1},\ldots ,{t}_{n}{}^{\# } \) and is primitive recursive.)

The indices \( j, n \), the Gödel numbers \( {z}_{l} \) of the terms \( {t}_{l} \), and the code \( z \) of the sequence \( \left\langle {{z}_{1},\ldots ,{z}_{n}}\right\rangle \), in (3) are all less than \( y \). We can replace \( k \) above with \( \operatorname{len}\left( y\right) \). Hence we can express "\( y \) codes a formation sequence for the term with Gödel number \( x \)" using only primitive recursive functions and relations together with bounded quantification, so this relation is primitive recursive.
|
Yes
|
Proposition 32.6. The function \( \operatorname{num}\left( n\right) = {}^{\# }{n}^{\# } \) is primitive recursive.
|
Proof. We define \( \operatorname{num}\left( n\right) \) by primitive recursion:

\[ \operatorname{num}\left( 0\right) = {}^{\# }{\mathrm{o}}^{\# } \]

\[ \operatorname{num}\left( {n + 1}\right) = {}^{\# }\prime {\left( {}^{\# } \frown \operatorname{num}\left( n\right) \frown {}^{\# }\right) }^{\# }. \]
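Modeling the Gödel number of an expression by the expression itself, so that string concatenation plays the role of \( \frown \), the recursion amounts to the following sketch:

```python
# A sketch of num(n), with strings standing in for Gödel numbers, so that
# string concatenation plays the role of ⌢. The numeral for n + 1 is the
# successor symbol applied to the numeral for n.
def num(n):
    if n == 0:
        return "o"
    return "'(" + num(n - 1) + ")"

print(num(3))  # → '('('(o)))
```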
|
Yes
|
Proposition 32.7. The relation \( \operatorname{Atom}\left( x\right) \) which holds iff \( x \) is the Gödel number of an atomic formula, is primitive recursive.
|
Proof. The number \( x \) is the Gödel number of an atomic formula iff one of the following holds:

1. There are \( n, j < x \), and \( z < x \) such that for each \( i < n \), \( \operatorname{Term}\left( {\left( z\right) }_{i}\right) \) and \( x = \)

\[ {}^{\# }{P}_{j}^{n}{\left( {}^{\# } \frown \operatorname{flatten}\left( z\right) \frown {}^{\# }\right) }^{\# }. \]

2. There are \( {z}_{1},{z}_{2} < x \) such that \( \operatorname{Term}\left( {z}_{1}\right) \), \( \operatorname{Term}\left( {z}_{2}\right) \), and \( x = \)

\[ {}^{\# } = {\left( {}^{\# } \frown {z}_{1} \frown {}^{\# }{,}^{\# } \frown {z}_{2} \frown {}^{\# }\right) }^{\# }. \]

3. \( x = {}^{\# } \bot {}^{\# } \).
|
No
|
Proposition 32.8. The relation \( \operatorname{Frm}\left( x\right) \) which holds iff \( x \) is the Gödel number of a formula is primitive recursive.
|
Proof. A sequence of symbols \( s \) is a formula iff there is a formation sequence \( {s}_{0},\ldots ,{s}_{k - 1} = s \) of formulas which records how \( s \) was formed from atomic formulas according to the formation rules. The code for each \( {s}_{i} \) (and indeed the code of the sequence \( \left\langle {{s}_{0},\ldots ,{s}_{k - 1}}\right\rangle \)) is less than the code \( x \) of \( s \).
|
No
|
Proposition 32.9. The relation \( \operatorname{FreeOcc}\left( {x, z, i}\right) \), which holds iff the \( i \) -th symbol of the formula with Gödel number \( x \) is a free occurrence of the variable with Gödel number \( z \), is primitive recursive.
|
Proof. Exercise.
|
No
|
Proposition 32.10. The property \( \operatorname{Sent}\left( x\right) \) which holds iff \( x \) is the Gödel number of a sentence is primitive recursive.
|
Proof. A sentence is a formula without free occurrences of variables. So \( \operatorname{Sent}\left( x\right) \) holds iff

\[ \left( {\forall i < \operatorname{len}\left( x\right) }\right) \left( {\forall z < x}\right) \left( {\left( {\exists j < z}\right)\, z = {}^{\# }{v}_{j}{}^{\# } \rightarrow \neg \operatorname{FreeOcc}\left( {x, z, i}\right) }\right) . \]
|
No
|
Proposition 32.11. There is a primitive recursive function \( \operatorname{Subst}\left( {x, y, z}\right) \) with the property that

\[ \operatorname{Subst}\left( {{}^{\# }{\varphi }^{\# },{}^{\# }{t}^{\# },{}^{\# }{u}^{\# }}\right) = {}^{\# }\varphi {\left\lbrack t/u\right\rbrack }^{\# } \]
|
Proof. We can define a function \( \operatorname{hSubst} \) by primitive recursion as follows:

\[ \operatorname{hSubst}\left( {x, y, z,0}\right) = \Lambda \]

\[ \operatorname{hSubst}\left( {x, y, z, i + 1}\right) = \left\{ \begin{array}{ll} \operatorname{hSubst}\left( {x, y, z, i}\right) \frown y & \text{ if }\operatorname{FreeOcc}\left( {x, z, i}\right) \\ \operatorname{append}\left( {\operatorname{hSubst}\left( {x, y, z, i}\right) ,{\left( x\right) }_{i}}\right) & \text{ otherwise. } \end{array}\right. \]

\( \operatorname{Subst}\left( {x, y, z}\right) \) can now be defined as \( \operatorname{hSubst}\left( {x, y, z,\operatorname{len}\left( x\right) }\right) \).
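Reading Gödel numbers of expressions as symbol tuples, the recursion can be sketched as follows. The stand-in free_occ below simply checks whether the \( i \)-th symbol is the variable itself, which ignores binding; the real FreeOcc of Proposition 32.9 is more careful.

```python
# A sketch of hSubst/Subst over symbol tuples instead of Gödel numbers.
# free_occ is a simplified stand-in for FreeOcc: it treats every occurrence
# of the variable as free, ignoring quantifiers (the real FreeOcc does not).
def free_occ(x, z, i):
    return x[i] == z

def h_subst(x, y, z, i):
    if i == 0:
        return ()  # Λ, the empty sequence
    prefix = h_subst(x, y, z, i - 1)
    if free_occ(x, z, i - 1):
        return prefix + y           # splice in the whole term y
    return prefix + (x[i - 1],)     # copy the i-th symbol unchanged

def subst(x, y, z):
    return h_subst(x, y, z, len(x))

# Substitute the term c0 for the variable v0 in P(v0, v0):
print(subst(("P", "(", "v0", ",", "v0", ")"), ("c0",), "v0"))
# → ('P', '(', 'c0', ',', 'c0', ')')
```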
|
Yes
|
Proposition 32.12. The relation \( \operatorname{FreeFor}\left( {x, y, z}\right) \), which holds iff the term with Gödel number \( y \) is free for the variable with Gödel number \( z \) in the formula with Gödel number \( x \), is primitive recursive.
|
Proof. Exercise.
|
No
|
Consider the very simple derivation

\[ \dfrac{\dfrac{\varphi \Rightarrow \varphi }{\varphi \land \psi \Rightarrow \varphi }\; \land \mathrm{L}}{\Rightarrow \left( {\varphi \land \psi }\right) \rightarrow \varphi }\; \rightarrow \mathrm{R} \]
|
The Gödel number of the initial sequent would be \( {p}_{0} = \left\langle {0,{}^{\# }\varphi \Rightarrow {\varphi }^{\# }}\right\rangle \). The Gödel number of the derivation ending in the conclusion of \( \land \mathrm{L} \) would be \( {p}_{1} = \left\langle {1,{p}_{0},{}^{\# }\varphi \land \psi \Rightarrow {\varphi }^{\# },9}\right\rangle \) (1 since \( \land \mathrm{L} \) has one premise, the Gödel number of the conclusion \( \varphi \land \psi \Rightarrow \varphi \), and 9 is the number coding \( \land \mathrm{L} \)). The Gödel number of the entire derivation then is \( \left\langle {1,{p}_{1},{}^{\# } \Rightarrow \left( {\varphi \land \psi }\right) \rightarrow {\varphi }^{\# },{14}}\right\rangle \), i.e.,

\[ \langle 1,\langle 1,\langle 0,{}^{\# }\varphi \Rightarrow {\varphi }^{\# }\rangle ,{}^{\# }\varphi \land \psi \Rightarrow {\varphi }^{\# },9\rangle ,{}^{\# } \Rightarrow \left( {\varphi \land \psi }\right) \rightarrow {\varphi }^{\# },{14}\rangle . \]
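The nesting is easy to mirror with tuples. The placeholder integers below stand in for the (very large) Gödel numbers of the three sequents; only the structure of the code is the point here.

```python
# Build the nested code of the derivation above with tuples. The SEQ_*
# constants are placeholders for the Gödel numbers of the three sequents
# (an illustration, not the actual values).
SEQ_INITIAL = 101   # stands in for the code of φ ⇒ φ
SEQ_AND_L = 102     # stands in for the code of φ ∧ ψ ⇒ φ
SEQ_END = 103       # stands in for the code of ⇒ (φ ∧ ψ) → φ

p0 = (0, SEQ_INITIAL)          # an initial sequent: 0 premises
p1 = (1, p0, SEQ_AND_L, 9)     # one premise p0, conclusion, rule ∧L = 9
p = (1, p1, SEQ_END, 14)       # one premise p1, conclusion, rule →R = 14

print(p)  # (1, (1, (0, 101), 102, 9), 103, 14)
```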
|
Yes
|
The property \( \operatorname{Correct}\left( p\right) \) which holds iff the last inference in the derivation \( \pi \) with Gödel number \( p \) is correct, is primitive recursive.
|
Proof. \( \Gamma \Rightarrow \Delta \) is an initial sequent if either there is a sentence \( \varphi \) such that \( \Gamma \Rightarrow \Delta \) is \( \varphi \Rightarrow \varphi \), or there is a term \( t \) such that \( \Gamma \Rightarrow \Delta \) is \( \varnothing \Rightarrow t = t \). In terms of Gödel numbers, \( \operatorname{InitSeq}\left( s\right) \) holds iff

\[ \left( {\exists x < s}\right) \left( {\operatorname{Sent}\left( x\right) \land s = \langle \langle x\rangle ,\langle x\rangle \rangle }\right) \vee \]

\[ \left( {\exists t < s}\right) \left( {\operatorname{Term}\left( t\right) \land s = \left\langle {0,\left\langle {{}^{\# } = {\left( {}^{\# } \frown t \frown {}^{\# }{,}^{\# } \frown t \frown {}^{\# }\right) }^{\# }}\right\rangle }\right\rangle }\right) . \]

We also have to show that for each rule of inference \( R \) the relation \( {\operatorname{FollowsBy}}_{R}\left( p\right) \) is primitive recursive, where \( {\operatorname{FollowsBy}}_{R}\left( p\right) \) holds iff \( p \) is the Gödel number of a derivation \( \pi \), and the end-sequent of \( \pi \) follows by a correct application of \( R \) from the immediate sub-derivations of \( \pi \).

A simple case is that of the \( \land \mathrm{R} \) rule. If \( \pi \) ends in a correct \( \land \mathrm{R} \) inference, it looks like this:

\[ \dfrac{\Gamma \Rightarrow \Delta ,\varphi \qquad \Gamma \Rightarrow \Delta ,\psi }{\Gamma \Rightarrow \Delta ,\varphi \land \psi }\; \land \mathrm{R} \]

So, the last inference in the derivation \( \pi \) is a correct application of \( \land \mathrm{R} \) iff there are sequences of sentences \( \Gamma \) and \( \Delta \) as well as two sentences \( \varphi \) and \( \psi \) such that the end-sequent of \( {\pi }_{1} \) is \( \Gamma \Rightarrow \Delta ,\varphi \), the end-sequent of \( {\pi }_{2} \) is \( \Gamma \Rightarrow \Delta ,\psi \), and the end-sequent of \( \pi \) is \( \Gamma \Rightarrow \Delta ,\varphi \land \psi \). We just have to translate this into Gödel numbers.
If \( s = {}^{\# }\Gamma \Rightarrow {\Delta }^{\# } \) then \( {\left( s\right) }_{0} = {}^{\# }{\Gamma }^{\# } \) and \( {\left( s\right) }_{1} = {}^{\# }{\Delta }^{\# } \). So, \( {\operatorname{FollowsBy}}_{\land \mathrm{R}}\left( p\right) \) holds iff

\[ \left( {\exists g < p}\right) \left( {\exists d < p}\right) \left( {\exists a < p}\right) \left( {\exists b < p}\right) \]

\[ \operatorname{EndSequent}\left( p\right) = \left\langle {g, d \frown \left\langle {{}^{\# }{\left( {}^{\# } \frown a \frown {}^{\# } \land {}^{\# } \frown b \frown {}^{\# }\right) }^{\# }}\right\rangle }\right\rangle \land \]

\[ \operatorname{EndSequent}\left( {\left( p\right) }_{1}\right) = \langle g, d \frown \langle a\rangle \rangle \land \]

\[ \operatorname{EndSequent}\left( {\left( p\right) }_{2}\right) = \langle g, d \frown \langle b\rangle \rangle \land \]

\[ {\left( p\right) }_{0} = 2 \land \operatorname{LastRule}\left( p\right) = {10}. \]

The individual lines express, respectively, that the end-sequent of \( \pi \) is \( \Gamma \Rightarrow \Delta ,\varphi \land \psi \); that the end-sequent of the first immediate sub-derivation \( {\pi }_{1} \) is \( \Gamma \Rightarrow \Delta ,\varphi \); that the end-sequent of the second, \( {\pi }_{2} \), is \( \Gamma \Rightarrow \Delta ,\psi \); and that \( \pi \) has two immediate sub-derivations and its last inference is \( \land \mathrm{R} \) (coded by 10). Here \( g \), \( d \), \( a \), and \( b \) are the Gödel numbers of \( \Gamma \), \( \Delta \), \( \varphi \), and \( \psi \).
|
Yes
|
Proposition 32.16. The relation \( \operatorname{Deriv}\left( p\right) \) which holds if \( p \) is the Gödel number of a correct derivation \( \pi \), is primitive recursive.
|
Proof. A derivation \( \pi \) is correct if every one of its inferences is a correct application of a rule, i.e., if every one of its sub-derivations ends in a correct inference. So, \( \operatorname{Deriv}\left( p\right) \) iff

\[ \left( {\forall i < \operatorname{len}\left( {\operatorname{SubtreeSeq}\left( p\right) }\right) }\right) \operatorname{Correct}\left( {\left( \operatorname{SubtreeSeq}\left( p\right) \right) }_{i}\right) . \]
|
No
|
Consider the very simple derivation

\[ \dfrac{\dfrac{{\left\lbrack \varphi \land \psi \right\rbrack }^{1}}{\varphi }\; \land \text{Elim}}{\left( {\varphi \land \psi }\right) \rightarrow \varphi }\; {\rightarrow} \text{Intro}^{1} \]
|
The Gödel number of the assumption would be \( {d}_{0} = \left\langle {0,{}^{\# }\varphi \land {\psi }^{\# },1}\right\rangle \). The Gödel number of the derivation ending in the conclusion of \( \land \) Elim would be \( {d}_{1} = \left\langle {1,{d}_{0},{}^{\# }{\varphi }^{\# },0,2}\right\rangle \) (1 since \( \land \) Elim has one premise, the Gödel number of the conclusion \( \varphi \), 0 because no assumption is discharged, and 2 is the number coding \( \land \) Elim). The Gödel number of the entire derivation then is \( \left\langle {1,{d}_{1},{}^{\# }{\left( \left( \varphi \land \psi \right) \rightarrow \varphi \right) }^{\# },1,5}\right\rangle \), i.e.,

\[ \langle 1,\langle 1,\langle 0,{}^{\# }\varphi \land {\psi }^{\# },1\rangle ,{}^{\# }{\varphi }^{\# },0,2\rangle ,{}^{\# }{\left( \left( \varphi \land \psi \right) \rightarrow \varphi \right) }^{\# },1,5\rangle . \]
|
Yes
|
Proposition 32.20. The following relations are primitive recursive:

1. \( \varphi \) occurs as an assumption in \( \delta \) with label \( n \).

2. All assumptions in \( \delta \) with label \( n \) are of the form \( \varphi \) (i.e., we can discharge the assumption \( \varphi \) using label \( n \) in \( \delta \)).
|
Proof. We have to show that the corresponding relations between Gödel numbers of formulas and Gödel numbers of derivations are primitive recursive.

1. We want to show that \( \operatorname{Assum}\left( {x, d, n}\right) \), which holds if \( x \) is the Gödel number of an assumption of the derivation with Gödel number \( d \) labelled \( n \), is primitive recursive. This is the case if the derivation with Gödel number \( \langle 0, x, n\rangle \) is a sub-derivation of \( d \). Note that the way we code derivations is a special case of the coding of trees introduced in section 27.12, so the primitive recursive function \( \operatorname{SubtreeSeq}\left( d\right) \) gives a sequence of Gödel numbers of all sub-derivations of \( d \) (of length at most \( d \)). So we can define

\[ \operatorname{Assum}\left( {x, d, n}\right) \Leftrightarrow \left( {\exists i < d}\right) {\left( \operatorname{SubtreeSeq}\left( d\right) \right) }_{i} = \langle 0, x, n\rangle . \]

2. We want to show that \( \operatorname{Discharge}\left( {x, d, n}\right) \), which holds if all assumptions with label \( n \) in the derivation with Gödel number \( d \) are the formula with Gödel number \( x \), is primitive recursive. This relation holds iff \( \left( {\forall y < d}\right) \left( {\operatorname{Assum}\left( {y, d, n}\right) \rightarrow y = x}\right) \).
|
Yes
|
Proposition 32.21. The property \( \operatorname{Correct}\left( d\right) \) which holds iff the last inference in the derivation \( \delta \) with Gödel number \( d \) is correct, is primitive recursive.
|
Proof. Here we have to show that for each rule of inference \( R \) the relation \( {\operatorname{FollowsBy}}_{R}\left( d\right) \) is primitive recursive, where \( {\operatorname{FollowsBy}}_{R}\left( d\right) \) holds iff \( d \) is the Gödel number of a derivation \( \delta \), and the end-formula of \( \delta \) follows by a correct application of \( R \) from the immediate sub-derivations of \( \delta \).

A simple case is that of the \( \land \) Intro rule. If \( \delta \) ends in a correct \( \land \) Intro inference, it looks like this:

\[ \dfrac{\varphi \qquad \psi }{\varphi \land \psi }\; \land \text{Intro} \]

Then the Gödel number \( d \) of \( \delta \) is \( \left\langle {2,{d}_{1},{d}_{2},{}^{\# }{\left( \varphi \land \psi \right) }^{\# },0, k}\right\rangle \) where \( \operatorname{EndFmla}\left( {d}_{1}\right) = {}^{\# }{\varphi }^{\# } \), \( \operatorname{EndFmla}\left( {d}_{2}\right) = {}^{\# }{\psi }^{\# } \), \( n = 0 \), and \( k = 1 \). So we can define \( {\operatorname{FollowsBy}}_{\land \text{Intro}}\left( d\right) \) as

\[ {\left( d\right) }_{0} = 2 \land \operatorname{DischargeLabel}\left( d\right) = 0 \land \operatorname{LastRule}\left( d\right) = 1 \land \]

\[ \operatorname{EndFmla}\left( d\right) = {}^{\# }{\left( {}^{\# } \frown \operatorname{EndFmla}\left( {\left( d\right) }_{1}\right) \frown {}^{\# }{ \land }^{\# } \frown \operatorname{EndFmla}\left( {\left( d\right) }_{2}\right) \frown {}^{\# }\right) }^{\# }. \]
|
Yes
|
Proposition 32.22. The relation \( \operatorname{Deriv}\left( d\right) \) which holds if \( d \) is the Gödel number of a correct derivation \( \delta \), is primitive recursive.
|
Proof. A derivation \( \delta \) is correct if every one of its inferences is a correct application of a rule, i.e., if every one of its sub-derivations ends in a correct inference. So, \( \operatorname{Deriv}\left( d\right) \) iff\n\n\[ \left( {\forall i < \operatorname{len}\left( {\operatorname{SubtreeSeq}\left( d\right) }\right) }\right) \operatorname{Correct}\left( {\left( \operatorname{SubtreeSeq}\left( d\right) \right) }_{i}\right) \]
|
Yes
|
Proposition 32.23. The relation OpenAssum \( \left( {z, d}\right) \) that holds if \( z \) is the Gödel number of an undischarged assumption \( \varphi \) of the derivation \( \delta \) with Gödel number \( d \), is primitive recursive.
|
Proof. An occurrence of an assumption is discharged if it occurs with label \( n \) in a sub-derivation of \( \delta \) that ends in a rule with discharge label \( n \). So \( \varphi \) is an undischarged assumption of \( \delta \) if at least one of its occurrences is not discharged in \( \delta \). We must be careful: \( \delta \) may contain both discharged and undischarged occurrences of \( \varphi \).\n\nConsider a sequence \( {\delta }_{0},\ldots ,{\delta }_{k} \) where \( {\delta }_{0} = \delta \), \( {\delta }_{k} \) is the assumption \( {\left\lbrack \varphi \right\rbrack }^{n} \) (for some \( n \)), and \( {\delta }_{i + 1} \) is an immediate sub-derivation of \( {\delta }_{i} \). If such a sequence exists in which no \( {\delta }_{i} \) ends in an inference with discharge label \( n \), then \( \varphi \) is an undischarged assumption of \( \delta \).\n\nThe primitive recursive function SubtreeSeq \( \left( d\right) \) provides us with a sequence of Gödel numbers of all sub-derivations of \( \delta \). Any sequence of Gödel numbers of sub-derivations of \( \delta \) is a subsequence of it. Being a subsequence of is a primitive recursive relation: Subseq \( \left( {s,{s}^{\prime }}\right) \) holds iff \( \left( {\forall i < \operatorname{len}\left( s\right) }\right) \left( {\exists j < \operatorname{len}\left( {s}^{\prime }\right) }\right) {\left( s\right) }_{i} = {\left( {s}^{\prime }\right) }_{j} \). Being an immediate sub-derivation is as well: Subderiv \( \left( {d,{d}^{\prime }}\right) \) holds iff \( \left( {\exists j < {\left( {d}^{\prime }\right) }_{0}}\right) d = {\left( {d}^{\prime }\right) }_{j + 1} \).
So we can define OpenAssum \( \left( {z, d}\right) \) by\n\n\[ \left( {\exists s < \operatorname{SubtreeSeq}\left( d\right) }\right) \left( {\operatorname{Subseq}\left( {s,\operatorname{SubtreeSeq}\left( d\right) }\right) \land {\left( s\right) }_{0} = d \land }\right. \]\n\n\[ \left( {\exists n < d}\right) \left( {{\left( s\right) }_{\operatorname{len}\left( s\right) \dot{ - }1} = \langle 0, z, n\rangle \land }\right. \]\n\n\[ \left( {\forall i < \left( {\operatorname{len}\left( s\right) \dot{ - }1}\right) }\right) \left( {\operatorname{Subderiv}\left( {{\left( s\right) }_{i + 1},{\left( s\right) }_{i}}\right) \land }\right. \]\n\n\[ \left. {\left. {\left. {\operatorname{DischargeLabel}\left( {\left( s\right) }_{i + 1}\right) \neq n}\right) }\right) }\right) . \]
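The condition OpenAssum expresses can also be checked directly on a toy tuple coding (our own sketch: leaves are `(0, z, n)`, inferences are `(k, d1, ..., dk, fmla, n, r)`). Instead of quantifying over coded subsequences, the walk below tracks the discharge labels seen on the path from the root — the same condition the formula expresses:

```python
def open_assum(z, d, blocked=()):
    """z occurs as an assumption of d with no discharging inference above it."""
    if d[0] == 0:                    # a leaf assumption (0, z', n)
        return d[1] == z and d[2] not in blocked
    n = d[d[0] + 2]                  # discharge label; 0 means nothing discharged
    if n:
        blocked = blocked + (n,)
    return any(open_assum(z, d[i], blocked) for i in range(1, d[0] + 1))
```

A recursive walk is the natural phrasing in code; the formula in the text achieves the same effect with a bounded search over coded root-to-leaf paths.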
|
Yes
|
Consider the very simple derivation\n\n1. \( \psi \rightarrow \left( {\psi \vee \varphi }\right) \)\n\n2. \( \left( {\psi \rightarrow \left( {\psi \vee \varphi }\right) }\right) \rightarrow \left( {\varphi \rightarrow \left( {\psi \rightarrow \left( {\psi \vee \varphi }\right) }\right) }\right) \)\n\n3. \( \varphi \rightarrow \left( {\psi \rightarrow \left( {\psi \vee \varphi }\right) }\right) \)
|
The Gödel number of this derivation would simply be\n\n\[ \left\langle {{}^{\#}\psi \rightarrow \left( {\psi \vee \varphi }\right) {}^{\#},\;{}^{\#}\left( {\psi \rightarrow \left( {\psi \vee \varphi }\right) }\right) \rightarrow \left( {\varphi \rightarrow \left( {\psi \rightarrow \left( {\psi \vee \varphi }\right) }\right) }\right) {}^{\#},\;{}^{\#}\varphi \rightarrow \left( {\psi \rightarrow \left( {\psi \vee \varphi }\right) }\right) {}^{\#}}\right\rangle . \]
|
Yes
|
Theorem 33.2. A function is representable in \( \mathbf{Q} \) if and only if it is computable.
|
There are two directions to proving the theorem. The left-to-right direction is fairly straightforward once arithmetization of syntax is in place. The other direction requires more work. Here is the basic idea: we pick "general recursive" as a way of making "computable" precise, and show that every general recursive function is representable in \( \mathbf{Q} \).
|
No
|
Lemma 33.3. Every function that is representable in \( \mathbf{Q} \) is computable.
|
Proof. Let’s first give the intuitive idea for why this is true. If \( f\left( {{x}_{0},\ldots ,{x}_{k}}\right) \) is representable in \( \mathbf{Q} \), there is a formula \( {\varphi }_{f}\left( {{x}_{0},\ldots ,{x}_{k}, y}\right) \) such that\n\n\[ \mathbf{Q} \vdash {\varphi }_{f}\left( {\overline{{n}_{0}},\ldots ,\overline{{n}_{k}},\bar{m}}\right) \;\text{ iff }\;m = f\left( {{n}_{0},\ldots ,{n}_{k}}\right) .\n\]\n\nTo compute \( f \), we do the following. List all the possible derivations \( \delta \) in the language of arithmetic. This is possible to do mechanically. For each one, check if it is a derivation of a formula of the form \( {\varphi }_{f}\left( {\overline{{n}_{0}},\ldots ,\overline{{n}_{k}},\bar{m}}\right) \). If it is, \( m \) must be \( = f\left( {{n}_{0},\ldots ,{n}_{k}}\right) \) and we’ve found the value of \( f \). The search terminates because \( \mathbf{Q} \vdash {\varphi }_{f}\left( {\overline{{n}_{0}},\ldots ,\overline{{n}_{k}},\overline{f\left( {{n}_{0},\ldots ,{n}_{k}}\right) }}\right) \), so eventually we find a \( \delta \) of the right sort.\n\nThis is not quite precise because our procedure operates on derivations and formulas instead of just on numbers, and we haven’t explained exactly why "listing all derivations" is mechanical. The arithmetization of syntax carried out above supplies the missing precision: derivations can be coded by numbers, and the relevant operations on those codes are computable.
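The search itself can be sketched as a dovetailed loop over pairs (derivation code, candidate value). The verifier below is a simulated stand-in for "check whether \( e \) codes a derivation of \( {\varphi }_{f}\left( {\bar{n},\bar{m}}\right) \)" — the point is only that the outer search is mechanical and terminates as soon as some pair is accepted.

```python
def compute_via_search(is_derivation_of, n):
    """Dovetail through all pairs (e, m) until the verifier accepts; return m."""
    c = 0
    while True:
        for e in range(c + 1):       # all pairs with e + m == c
            m = c - e
            if is_derivation_of(e, n, m):
                return m
        c += 1

# Simulated verifier for f(n) = n * n: "derivation" 0 proves exactly the
# true instances. A real verifier would check a coded Q-derivation instead.
def toy_verifier(e, n, m):
    return e == 0 and m == n * n
```

Because a correct derivation of \( {\varphi }_{f}\left( {\bar{n},\overline{f\left( n\right) }}\right) \) exists, some pair is always accepted, so the loop halts.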
|
Yes
|
Theorem 33.7. Suppose \( {x}_{0},\ldots ,{x}_{n} \) are (pairwise) relatively prime. Let \( {y}_{0},\ldots ,{y}_{n} \) be any numbers. Then there is a number \( z \) such that\n\n\[ z \equiv {y}_{0}{\;\operatorname{mod}\;{x}_{0}} \]\n\n\[ z \equiv {y}_{1}{\;\operatorname{mod}\;{x}_{1}} \]\n\n\[ \vdots \]\n\n\[ z \equiv {y}_{n}{\;\operatorname{mod}\;{x}_{n}}. \]
|
Here is how we will use the Chinese Remainder theorem: if \( {x}_{0},\ldots ,{x}_{n} \) are bigger than \( {y}_{0},\ldots ,{y}_{n} \) respectively, then we can take \( z \) to code the sequence \( \left\langle {{y}_{0},\ldots ,{y}_{n}}\right\rangle \). To recover \( {y}_{i} \), we need only divide \( z \) by \( {x}_{i} \) and take the remainder. To use this coding, we will need to find suitable values for \( {x}_{0},\ldots ,{x}_{n} \).\n\nA couple of observations will help us in this regard. Given \( {y}_{0},\ldots ,{y}_{n} \), let\n\n\[ j = \max \left( {n,{y}_{0},\ldots ,{y}_{n}}\right) + 1, \]\n\nand let\n\n\[ {x}_{0} = 1 + j! \]\n\n\[ {x}_{1} = 1 + 2 \cdot j! \]\n\n\[ {x}_{2} = 1 + 3 \cdot j! \]\n\n\[ \vdots \]\n\n\[ {x}_{n} = 1 + \left( {n + 1}\right) \cdot j! \]\n\nThen two things are true:\n\n1. \( {x}_{0},\ldots ,{x}_{n} \) are relatively prime.\n\n2. For each \( i \), \( {y}_{i} < {x}_{i} \).\n\nTo see that (1) is true, note that if \( p \) is a prime number and \( p \mid {x}_{i} \) and \( p \mid {x}_{k} \), then \( p \mid 1 + \left( {i + 1}\right) j! \) and \( p \mid 1 + \left( {k + 1}\right) j! \). But then \( p \) divides their difference,\n\n\[ \left( {1 + \left( {i + 1}\right) j!}\right) - \left( {1 + \left( {k + 1}\right) j!}\right) = \left( {i - k}\right) j!. \]\n\nSince \( p \) divides \( 1 + \left( {i + 1}\right) j! \), it can’t divide \( j! \) as well (otherwise, the first division would leave a remainder of 1). So \( p \) divides \( i - k \), since \( p \) divides \( \left( {i - k}\right) j! \). But \( \left| {i - k}\right| \) is at most \( n \), and we have chosen \( j > n \), so this implies that \( p \mid j! \), again a contradiction. So there is no prime number dividing both \( {x}_{i} \) and \( {x}_{k} \). Clause (2) is easy: we have \( {y}_{i} < j \leq j! < {x}_{i} \).
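The construction is easy to check numerically. A small sketch (function names ours): build the \( {x}_{i} \) from \( j! \), confirm pairwise coprimality, and recover each \( {y}_{i} \) as a remainder of a \( z \) found by brute-force search.

```python
from math import factorial, gcd

def moduli(ys):
    """x_i = 1 + (i + 1) * j! with j = max(n, y_0, ..., y_n) + 1."""
    n = len(ys) - 1
    j = max([n] + ys) + 1
    return [1 + (i + 1) * factorial(j) for i in range(n + 1)]

def crt(ys, xs):
    """Smallest z with z ≡ y_i mod x_i; brute force suffices for a demo."""
    z = 0
    while not all(z % x == y for y, x in zip(ys, xs)):
        z += 1
    return z
```

For `ys = [2, 1, 3]` this yields moduli `[25, 49, 73]`, and the `z` found decodes back to the original sequence by taking remainders.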
|
Yes
|
Lemma 33.8. If \( h \) can be defined from \( f \) and \( g \) using primitive recursion, it can be defined from \( f, g \), the functions zero, succ, \( {P}_{i}^{n} \), add, mult, \( {\chi }_{ = } \), using composition and regular minimization.
|
Proof. First, define an auxiliary function \( \widehat{h}\left( {\overrightarrow{x}, y}\right) \) which returns the least number \( d \) such that \( d \) codes a sequence which satisfies\n\n1. \( {\left( d\right) }_{0} = f\left( \overrightarrow{x}\right) \), and\n\n2. for each \( i < y \), \( {\left( d\right) }_{i + 1} = g\left( {\overrightarrow{x}, i,{\left( d\right) }_{i}}\right) \),\n\nwhere now \( {\left( d\right) }_{i} \) is short for \( \beta \left( {d, i}\right) \). In other words, \( \widehat{h} \) returns the sequence \( \langle h\left( {\overrightarrow{x},0}\right), h\left( {\overrightarrow{x},1}\right) ,\ldots, h\left( {\overrightarrow{x}, y}\right) \rangle \). We can write \( \widehat{h} \) as\n\n\[ \widehat{h}\left( {\overrightarrow{x}, y}\right) = {\mu d}\left( {\beta \left( {d,0}\right) = f\left( \overrightarrow{x}\right) \land \left( {\forall i < y}\right) \beta \left( {d, i + 1}\right) = g\left( {\overrightarrow{x}, i,\beta \left( {d, i}\right) }\right) }\right) . \]\n\nNote: no primitive recursion is needed here, just minimization. The function we minimize is regular because of the beta function lemma Lemma 33.4.\n\nBut now we have\n\n\[ h\left( {\overrightarrow{x}, y}\right) = \beta \left( {\widehat{h}\left( {\overrightarrow{x}, y}\right), y}\right) \]\n\nso \( h \) can be defined from the basic functions using just composition and regular minimization.
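The construction runs as stated. In the sketch below (our own illustration), prime-power coding of sequences stands in for the \( \beta \) function — `at(d, i)` plays the role of \( {\left( d\right) }_{i} \) — and `h` is defined by a single unbounded minimization, exactly the shape of \( \widehat{h} \):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def at(d, i):
    """(d)_i: exponent of the i-th prime in d, a stand-in for β(d, i)."""
    p, e = PRIMES[i], 0
    while d % p == 0:
        d //= p
        e += 1
    return e

def h(f, g, x, y):
    """h defined from f and g by minimization alone, no primitive recursion."""
    d = 1
    while not (at(d, 0) == f(x) and
               all(at(d, i + 1) == g(x, i, at(d, i)) for i in range(y))):
        d += 1
    return at(d, y)
```

The loop is the \( \mu \)-operator: it finds the least code of a sequence whose entries obey the recursion equations, then reads off the last entry. Prime-power coding is far less efficient than the \( \beta \) function but makes the demo short.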
|
Yes
|
Lemma 33.13. Given natural numbers \( n \) and \( m \), if \( n \neq m \), then \( \mathbf{Q} \vdash \bar{n} \neq \bar{m} \) .
|
Proof. Use induction on \( n \) to show that for every \( m \), if \( n \neq m \), then \( \mathbf{Q} \vdash \bar{n} \neq \bar{m} \).\n\nIn the base case, \( n = 0 \). If \( m \) is not equal to 0, then \( m = k + 1 \) for some natural number \( k \). We have an axiom that says \( \forall x\, 0 \neq {x}^{\prime } \). By a quantifier axiom, replacing \( x \) by \( \bar{k} \), we can conclude \( 0 \neq {\bar{k}}^{\prime } \). But \( {\bar{k}}^{\prime } \) is just \( \bar{m} \).\n\nIn the induction step, we can assume the claim is true for \( n \), and consider \( n + 1 \). Let \( m \) be any natural number. There are two possibilities: either \( m = 0 \) or for some \( k \) we have \( m = k + 1 \). The first case is handled as above. In the second case, suppose \( n + 1 \neq k + 1 \). Then \( n \neq k \). By the induction hypothesis for \( n \) we have \( \mathbf{Q} \vdash \bar{n} \neq \bar{k} \). We have an axiom that says \( \forall x\forall y\left( {{x}^{\prime } = {y}^{\prime } \rightarrow x = y}\right) \). Using a quantifier axiom, we have \( {\bar{n}}^{\prime } = {\bar{k}}^{\prime } \rightarrow \bar{n} = \bar{k} \). Using propositional logic, we can conclude, in \( \mathbf{Q} \), \( \bar{n} \neq \bar{k} \rightarrow {\bar{n}}^{\prime } \neq {\bar{k}}^{\prime } \). Using modus ponens, we can conclude \( {\bar{n}}^{\prime } \neq {\bar{k}}^{\prime } \), which is what we want, since \( {\bar{k}}^{\prime } \) is \( \bar{m} \).
|
Yes
|
Proposition 33.14. The addition function \( \operatorname{add}\left( {{x}_{0},{x}_{1}}\right) = {x}_{0} + {x}_{1} \) is represented in \( \mathbf{Q} \) by
|
\[ y = \left( {{x}_{0} + {x}_{1}}\right) \]
|
No
|
Lemma 33.15. \( \mathbf{Q} \vdash \left( {\bar{n} + \bar{m}}\right) = \overline{n + m} \)
|
Proof. We prove this by induction on \( m \). If \( m = 0 \), the claim is that \( \mathbf{Q} \vdash \left( {\bar{n} + 0}\right) = \bar{n} \). This follows by axiom \( {Q}_{4} \). Now suppose the claim for \( m \); let’s prove the claim for \( m + 1 \), i.e., prove that \( \mathbf{Q} \vdash \left( {\bar{n} + \overline{m + 1}}\right) = \overline{n + m + 1} \). Note that \( \overline{m + 1} \) is just \( {\bar{m}}^{\prime } \), and \( \overline{n + m + 1} \) is just \( {\overline{n + m}}^{\prime } \). By axiom \( {Q}_{5} \), \( \mathbf{Q} \vdash \left( {\bar{n} + {\bar{m}}^{\prime }}\right) = {\left( \bar{n} + \bar{m}\right) }^{\prime } \). By induction hypothesis, \( \mathbf{Q} \vdash \left( {\bar{n} + \bar{m}}\right) = \overline{n + m} \). So \( \mathbf{Q} \vdash \left( {\bar{n} + {\bar{m}}^{\prime }}\right) = {\overline{n + m}}^{\prime } \), i.e., \( \mathbf{Q} \vdash \left( {\bar{n} + \overline{m + 1}}\right) = \overline{n + m + 1} \).
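The induction can be mimicked as a terminating rewrite procedure. In this toy sketch (ours, with numerals as strings of \( \prime \) over 0), each loop iteration is one application of axiom \( {Q}_{5} \), and the final step is \( {Q}_{4} \):

```python
def num(n):
    """The numeral n̄: n applications of ' to 0."""
    return "0" + "'" * n

def add_numerals(n, m):
    """Rewrite (n̄ + m̄) to a single numeral using only Q5 and then Q4."""
    t, s = num(n), num(m)
    while s.endswith("'"):   # Q5: (t + s') = (t + s)'
        s = s[:-1]
        t = t + "'"
    return t                 # Q4: (t + 0) = t
```

For example, `add_numerals(2, 3)` rewrites \( \left( {\bar{2} + \bar{3}}\right) \) through three \( {Q}_{5} \) steps and one \( {Q}_{4} \) step to the numeral \( \bar{5} \), matching the proof's induction on \( m \).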
|
Yes
|
Proposition 33.16. The multiplication function \( \operatorname{mult}\left( {{x}_{0},{x}_{1}}\right) = {x}_{0} \cdot {x}_{1} \) is represented in \( \mathbf{Q} \) by
|
\[ y = \left( {{x}_{0} \times {x}_{1}}\right) \]
|
No
|
Lemma 33.17. \( \mathbf{Q} \vdash \left( {\bar{n} \times \bar{m}}\right) = \overline{n \cdot m} \)
|
Proof. Exercise.
|
No
|