Proposition 18.14 (Monotonicity). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \).
Proof. Any derivation of \( \varphi \) from \( \Gamma \) is also a derivation of \( \varphi \) from \( \Delta \) .
Yes
Proposition 18.15 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
Proof. If \( \Gamma \vdash \varphi \), there is a derivation \( {\delta }_{0} \) of \( \varphi \) with all undischarged assumptions in \( \Gamma \). If \( \{ \varphi \} \cup \Delta \vdash \psi \), then there is a derivation \( {\delta }_{1} \) of \( \psi \) with all undischarged assumptions in \( \{ \varphi \} \cup \Delta \). Now consider:

![c5962287-92b4-4003-ac67-b592d0231929_266_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_266_0.jpg)

The undischarged assumptions are now all among \( \Gamma \cup \Delta \), so this shows \( \Gamma \cup \Delta \vdash \psi \).
Yes
Proposition 18.16. \( \Gamma \) is inconsistent iff \( \Gamma \vdash \varphi \) for every sentence \( \varphi \) .
Proof. Exercise.
No
Proposition 18.17 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \).
Proof. 1. If \( \Gamma \vdash \varphi \), then there is a derivation \( \delta \) of \( \varphi \) from \( \Gamma \). Let \( {\Gamma }_{0} \) be the set of undischarged assumptions of \( \delta \). Since any derivation is finite, \( {\Gamma }_{0} \) can only contain finitely many sentences. So, \( \delta \) is a derivation of \( \varphi \) from a finite \( {\Gamma }_{0} \subseteq \Gamma \).
Yes
Proposition 18.18. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
Proof. Let the derivation of \( \varphi \) from \( \Gamma \) be \( {\delta }_{1} \) and the derivation of \( \bot \) from \( \Gamma \cup \{ \varphi \} \) be \( {\delta }_{2} \). We can then derive:

![c5962287-92b4-4003-ac67-b592d0231929_267_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_267_0.jpg)

In the new derivation, the assumption \( \varphi \) is discharged, so it is a derivation from \( \Gamma \).
No
Proposition 18.19. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
Proof. First suppose \( \Gamma \vdash \varphi \), i.e., there is a derivation \( {\delta }_{0} \) of \( \varphi \) from undischarged assumptions \( \Gamma \). We obtain a derivation of \( \bot \) from \( \Gamma \cup \{ \neg \varphi \} \) as follows:

![c5962287-92b4-4003-ac67-b592d0231929_268_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_268_0.jpg)

Now assume \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent, and let \( {\delta }_{1} \) be the corresponding derivation of \( \bot \) from undischarged assumptions in \( \Gamma \cup \{ \neg \varphi \} \). We obtain a derivation of \( \varphi \) from \( \Gamma \) alone by using \( {\bot }_{C} \):

![c5962287-92b4-4003-ac67-b592d0231929_268_1.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_268_1.jpg)
Yes
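As an added illustration (Lean is not part of the text), the two directions of Proposition 18.19 can be checked propositionally in Lean 4. The right-to-left direction is exactly the classical rule \( {\bot}_{C} \) (proof by contradiction), which Lean provides as `Classical.byContradiction`:

```lean
-- Left to right: from a proof of φ, the assumption ¬φ yields ⊥.
example (φ : Prop) (h : φ) : ¬φ → False :=
  fun hn => hn h

-- Right to left: this is the classical rule ⊥_C.
example (φ : Prop) (h : ¬φ → False) : φ :=
  Classical.byContradiction h
```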
Proposition 18.20. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
Proof. Suppose \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \). Then there is a derivation \( \delta \) of \( \varphi \) from \( \Gamma \). Consider this simple application of the \( \neg \)Elim rule:

![c5962287-92b4-4003-ac67-b592d0231929_268_2.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_268_2.jpg)

Since \( \neg \varphi \in \Gamma \), all undischarged assumptions are in \( \Gamma \), so this shows that \( \Gamma \vdash \bot \). \( \Box \)
Yes
Proposition 18.21. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
Proof. There are derivations \( {\delta }_{1} \) and \( {\delta }_{2} \) of \( \bot \) from \( \Gamma \cup \{ \varphi \} \) and of \( \bot \) from \( \Gamma \cup \{ \neg \varphi \} \), respectively. We can then derive

![c5962287-92b4-4003-ac67-b592d0231929_268_3.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_268_3.jpg)

Since the assumptions \( \varphi \) and \( \neg \varphi \) are discharged, this is a derivation of \( \bot \) from \( \Gamma \) alone. Hence \( \Gamma \) is inconsistent.
Yes
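The propositional core of Proposition 18.21 can be stated in Lean 4 (an added illustration, not part of the text). Note that `φ → False` is definitionally \( \neg \varphi \), so the second hypothesis applies directly to the first:

```lean
-- If both φ and ¬φ lead to ⊥, then ⊥ follows outright:
-- h₁ : φ → False is definitionally a proof of ¬φ, to which h₂ applies.
example (φ : Prop) (h₁ : φ → False) (h₂ : ¬φ → False) : False :=
  h₂ h₁
```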
Proposition 18.22. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \).

2. \( \varphi ,\psi \vdash \varphi \land \psi \).
Proof. 1. We can derive both

\[ \frac{\varphi \land \psi }{\varphi } \land \operatorname{Elim}\;\frac{\varphi \land \psi }{\psi } \land \operatorname{Elim} \]

2. We can derive:

\[ \frac{\varphi \quad \psi }{\varphi \land \psi } \land \text{Intro} \]

\( \Box \)
Yes
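The \( \land \) rules have direct analogues in Lean 4 (an added cross-check, not part of the text): \( \land \)Elim is projection and \( \land \)Intro is the anonymous constructor.

```lean
example (φ ψ : Prop) (h : φ ∧ ψ) : φ := h.1          -- ∧Elim (left)
example (φ ψ : Prop) (h : φ ∧ ψ) : ψ := h.2          -- ∧Elim (right)
example (φ ψ : Prop) (h₁ : φ) (h₂ : ψ) : φ ∧ ψ := ⟨h₁, h₂⟩  -- ∧Intro
```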
Proposition 18.23. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
Proof. 1. Consider the following derivation:

\[ \frac{\varphi \vee \psi \;\frac{\neg \varphi \;{\left\lbrack \varphi \right\rbrack }^{1}}{\bot }\neg \text{Elim}\;\frac{\neg \psi \;{\left\lbrack \psi \right\rbrack }^{1}}{\bot }\neg \text{Elim}}{\bot }\lor \text{Elim}^{1} \]

This is a derivation of \( \bot \) from undischarged assumptions \( \varphi \vee \psi ,\neg \varphi \), and \( \neg \psi \).
Yes
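The same argument goes through semantically in Lean 4 (an added illustration): `Or.elim` plays the role of \( \lor \)Elim, and each disjunct is refuted by the matching negation.

```lean
-- φ ∨ ψ, ¬φ, ¬ψ together yield ⊥ by case analysis on the disjunction.
example (φ ψ : Prop) (h : φ ∨ ψ) (h₁ : ¬φ) (h₂ : ¬ψ) : False :=
  Or.elim h h₁ h₂
```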
Proposition 18.24. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) .
Proof. 1. We can derive:

\[ \frac{\varphi \rightarrow \psi \;\varphi }{\psi } \rightarrow \text{Elim} \]
Yes
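In Lean 4 (an added illustration), \( \rightarrow \)Elim is simply function application: a proof of \( \varphi \rightarrow \psi \) is a function from proofs of \( \varphi \) to proofs of \( \psi \).

```lean
-- Modus ponens (→Elim) as function application.
example (φ ψ : Prop) (h₁ : φ) (h₂ : φ → ψ) : ψ := h₂ h₁
```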
Theorem 18.25. If \( c \) is a constant not occurring in \( \Gamma \) or \( \varphi \left( x\right) \) and \( \Gamma \vdash \varphi \left( c\right) \), then \( \Gamma \vdash \forall {x\varphi }\left( x\right) \) .
Proof. Let \( \delta \) be a derivation of \( \varphi \left( c\right) \) from \( \Gamma \) . By adding a \( \forall \) Intro inference, we obtain a proof of \( \forall {x\varphi }\left( x\right) \) . Since \( c \) does not occur in \( \Gamma \) or \( \varphi \left( x\right) \), the eigenvariable condition is satisfied.
Yes
Proposition 18.26. 1. \( \varphi \left( t\right) \vdash \exists {x\varphi }\left( x\right) \) . 2. \( \forall {x\varphi }\left( x\right) \vdash \varphi \left( t\right) \) .
Proof. 1. The following is a derivation of \( \exists {x\varphi }\left( x\right) \) from \( \varphi \left( t\right) \) : \[ \frac{\varphi \left( t\right) }{\exists {x\varphi }\left( x\right) }\exists \text{ Intro } \] 2. The following is a derivation of \( \varphi \left( t\right) \) from \( \forall {x\varphi }\left( x\right) \) : \[ \frac{\forall {x\varphi }\left( x\right) }{\varphi \left( t\right) }\forall \text{Elim} \]
Yes
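Both quantifier rules of Proposition 18.26 have one-line Lean 4 counterparts (an added cross-check; the type `α` and predicate `φ` stand in for the domain and formula):

```lean
variable {α : Type} (φ : α → Prop) (t : α)

example (h : φ t) : ∃ x, φ x := ⟨t, h⟩   -- ∃Intro: supply t as witness
example (h : ∀ x, φ x) : φ t := h t      -- ∀Elim: instantiate at t
```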
Corollary 18.29. If \( \Gamma \) is satisfiable, then it is consistent.
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then \( \Gamma \vdash \bot \), i.e., there is a derivation of \( \bot \) from undischarged assumptions in \( \Gamma \). By Theorem 18.27, any structure \( \mathfrak{M} \) that satisfies \( \Gamma \) must satisfy \( \bot \). Since \( \mathfrak{M} \nvDash \bot \) for every structure \( \mathfrak{M} \), no \( \mathfrak{M} \) can satisfy \( \Gamma \), i.e., \( \Gamma \) is not satisfiable.
Yes
If \( s \) and \( t \) are closed terms, then \( \varphi \left( s\right), s = t \vdash \varphi \left( t\right) \)
\[ \frac{s = t\;\varphi \left( s\right) }{\varphi \left( t\right) } = \text{ Elim } \]
Yes
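In Lean 4 (an added illustration), \( = \)Elim is substitution of equals for equals: the `▸` operator rewrites \( \varphi \left( s\right) \) to \( \varphi \left( t\right) \) along a proof of \( s = t \).

```lean
-- =Elim: given s = t and φ(s), conclude φ(t) by rewriting.
example {α : Type} (φ : α → Prop) (s t : α) (h₁ : s = t) (h₂ : φ s) : φ t :=
  h₁ ▸ h₂
```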
We derive the sentence

\[ \forall x\forall y\left( {\left( {\varphi \left( x\right) \land \varphi \left( y\right) }\right) \rightarrow x = y}\right) \]

from the sentence

\[ \exists x\forall y\left( {\varphi \left( y\right) \rightarrow y = x}\right) \]
We develop the derivation backwards:

\[ \begin{matrix} \exists x\forall y\left( \varphi \left( y\right) \rightarrow y = x\right) \quad {\left\lbrack \varphi \left( a\right) \land \varphi \left( b\right) \right\rbrack }^{1} \\ \vdots \\ a = b \end{matrix} \]

\[ \frac{\dfrac{\dfrac{a = b}{\left( \varphi \left( a\right) \land \varphi \left( b\right) \right) \rightarrow a = b}\rightarrow \text{Intro}^{1}}{\forall y\left( \left( \varphi \left( a\right) \land \varphi \left( y\right) \right) \rightarrow a = y\right) }\forall \text{Intro}}{\forall x\forall y\left( \left( \varphi \left( x\right) \land \varphi \left( y\right) \right) \rightarrow x = y\right) }\forall \text{Intro} \]

We'll now have to use the main assumption: since it is an existential formula, we use \( \exists \)Elim to derive the intermediary conclusion \( a = b \):

\[ \frac{\exists x\forall y\left( \varphi \left( y\right) \rightarrow y = x\right) \quad \begin{matrix} {\left\lbrack \forall y\left( \varphi \left( y\right) \rightarrow y = c\right) \right\rbrack }^{2}\quad {\left\lbrack \varphi \left( a\right) \land \varphi \left( b\right) \right\rbrack }^{1} \\ \vdots \\ a = b \end{matrix} }{a = b}\exists \text{Elim}^{2} \]

The sub-derivation on the top right is completed by using its assumptions to show that \( a = c \) and \( b = c \). This requires two separate derivations. The derivation for \( a = c \) is as follows:

\[ \frac{\dfrac{{\left\lbrack \forall y\left( \varphi \left( y\right) \rightarrow y = c\right) \right\rbrack }^{2}}{\varphi \left( a\right) \rightarrow a = c}\forall \text{Elim}\quad \dfrac{{\left\lbrack \varphi \left( a\right) \land \varphi \left( b\right) \right\rbrack }^{1}}{\varphi \left( a\right) }\land \text{Elim}}{a = c}\rightarrow \text{Elim} \]

From \( a = c \) and \( b = c \) we derive \( a = b \) by \( = \)Elim.
Yes
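The whole derivation can be compressed into a few lines of Lean 4 (an added cross-check): from the existence of a \( c \) that every \( \varphi \)-witness equals, any two \( \varphi \)-witnesses \( a \) and \( b \) are equal, via \( a = c \) and \( b = c \), mirroring the two \( = \)Elim steps.

```lean
example {α : Type} (φ : α → Prop)
    (h : ∃ x, ∀ y, φ y → y = x) :
    ∀ a b, φ a ∧ φ b → a = b :=
  fun a b hab =>
    match h with
    -- c is the witness; hc says every φ-witness equals c.
    | ⟨c, hc⟩ => (hc a hab.1).trans (hc b hab.2).symm
```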
Proposition 18.32. Natural deduction with rules for \( = \) is sound.
Proof. Any formula of the form \( t = t \) is valid, since for every structure \( \mathfrak{M} \), \( \mathfrak{M} \vDash t = t \). (Note that we assume the term \( t \) to be ground, i.e., it contains no variables, so variable assignments are irrelevant.)

Suppose the last inference in a derivation is \( = \)Elim, i.e., the derivation has the following form:

![c5962287-92b4-4003-ac67-b592d0231929_276_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_276_0.jpg)

The premises \( {t}_{1} = {t}_{2} \) and \( \varphi \left( {t}_{1}\right) \) are derived from undischarged assumptions \( {\Gamma }_{1} \) and \( {\Gamma }_{2} \), respectively. We want to show that \( \varphi \left( {t}_{2}\right) \) follows from \( {\Gamma }_{1} \cup {\Gamma }_{2} \). Consider a structure \( \mathfrak{M} \) with \( \mathfrak{M} \vDash {\Gamma }_{1} \cup {\Gamma }_{2} \). By induction hypothesis, \( \mathfrak{M} \vDash \varphi \left( {t}_{1}\right) \) and \( \mathfrak{M} \vDash {t}_{1} = {t}_{2} \). Therefore, \( {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{1}\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{2}\right) \). Let \( s \) be any variable assignment, and \( {s}^{\prime } \) be the \( x \)-variant given by \( {s}^{\prime }\left( x\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{1}\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{2}\right) \). By Proposition 14.46, \( \mathfrak{M}, s \vDash \varphi \left( {t}_{1}\right) \) iff \( \mathfrak{M},{s}^{\prime } \vDash \varphi \left( x\right) \) iff \( \mathfrak{M}, s \vDash \varphi \left( {t}_{2}\right) \). Since \( \mathfrak{M} \vDash \varphi \left( {t}_{1}\right) \), we have \( \mathfrak{M} \vDash \varphi \left( {t}_{2}\right) \).
Yes
Every set of assumptions on its own is a tableau, but it will generally not be closed. (Obviously, it is closed only if the assumptions already contain a pair of signed formulas \( \mathbb{T}\varphi \) and \( \mathbb{F}\varphi \) .)
From a tableau (open or closed) we can obtain a new, larger one by applying one of the rules of inference to a signed formula \( \varphi \) in it. The rule will append one or more signed formulas to the end of any branch containing the occurrence of \( \varphi \) to which we apply the rule. For instance, consider the assumption \( \mathbb{T}\varphi \land \neg \varphi \). Here is the (open) tableau consisting of just that assumption:

\[ \text{1.}\mathbb{T}\varphi \land \neg \varphi \;\text{Assumption} \]

We obtain a new tableau from it by applying the \( \land \mathbb{T} \) rule to the assumption. That rule allows us to add two new lines to the tableau, \( \mathbb{T}\varphi \) and \( \mathbb{T}\neg \varphi \):

\[ \begin{matrix} 1. & \mathbb{T}\varphi \land \neg \varphi & \text{ Assumption } \\ 2. & \mathbb{T}\varphi & \land \mathbb{T}1 \\ 3. & \mathbb{T}\neg \varphi & \land \mathbb{T}1 \end{matrix} \]

When we write down tableaux, we record the rules we've applied on the right (e.g., \( \land \mathbb{T}1 \) means that the signed formula on that line is the result of applying the \( \land \mathbb{T} \) rule to the signed formula on line 1). This new tableau contains additional signed formulas, but only one of them (\( \mathbb{T}\neg \varphi \)) has a rule that can still be applied to it (in this case, the \( \neg \mathbb{T} \) rule). This results in the closed tableau

![c5962287-92b4-4003-ac67-b592d0231929_282_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_282_0.jpg)
Yes
Let’s find a closed tableau for the sentence \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \) .
We begin by writing the corresponding assumption at the top of the tableau.

\[ \text{1.}\mathbb{F}\left( {\varphi \land \psi }\right) \rightarrow \varphi \;\text{Assumption} \]

There is only one assumption, so only one signed formula to which we can apply a rule. (For every signed formula, there is always at most one rule that can be applied: it's the rule for the corresponding sign and main operator of the sentence.) In this case, this means we must apply \( \rightarrow \mathbb{F} \).

\[ \begin{matrix} \text{ 1. } & \mathbb{F}\left( {\varphi \land \psi }\right) \rightarrow \varphi \checkmark & \text{ Assumption } \\ \text{ 2. } & \mathbb{T}\varphi \land \psi & \rightarrow \mathbb{F}1 \\ \text{ 3. } & \mathbb{F}\varphi & \rightarrow \mathbb{F}1 \end{matrix} \]

To keep track of which signed formulas we have applied their corresponding rules to, we write a checkmark next to the sentence. However, only write a checkmark if the rule has been applied to all open branches. Once a signed formula has had the corresponding rule applied in every open branch, we will not have to return to it and apply the rule again. In this case, there is only one branch, so the rule only has to be applied once. (Note that checkmarks are only a convenience for constructing tableaux and are not officially part of the syntax of tableaux.)

There is one new signed formula to which we can apply a rule: the \( \mathbb{T}\varphi \land \psi \) on line 2.
Applying the \( \land \mathbb{T} \) rule results in:

<table><tr><td>1.</td><td>\( \mathbb{F}\left( {\varphi \land \psi }\right) \rightarrow \varphi \checkmark \)</td><td>Assumption</td></tr><tr><td>2.</td><td>\( \mathbb{T}\varphi \land \psi \checkmark \)</td><td>\( \rightarrow \mathbb{F}1 \)</td></tr><tr><td>3.</td><td>\( \mathbb{F}\varphi \)</td><td>\( \rightarrow \mathbb{F}1 \)</td></tr><tr><td>4.</td><td>\( \mathbb{T}\varphi \)</td><td>\( \land \mathbb{T}2 \)</td></tr><tr><td>5.</td><td>\( \mathbb{T}\psi \)</td><td>\( \land \mathbb{T}2 \)</td></tr><tr><td></td><td>\( \otimes \)</td><td></td></tr></table>

Since the branch now contains both \( \mathbb{T}\varphi \) (on line 4) and \( \mathbb{F}\varphi \) (on line 3), the branch is closed. Since it is the only branch, the tableau is closed. We have found a closed tableau for \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \).
Yes
Now let’s find a closed tableau for \( \left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \).
We begin with the corresponding assumption:

\[ \text{1.}\mathbb{F}\left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \;\text{Assumption} \]

The one signed formula in this tableau has main operator \( \rightarrow \) and sign \( \mathbb{F} \), so we apply the \( \rightarrow \mathbb{F} \) rule to it to obtain:

\[ \begin{matrix} \text{ 1. } & \mathbb{F}\left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \checkmark & & \text{ Assumption } \\ \text{ 2. } & \mathbb{T}\neg \varphi \vee \psi & & \rightarrow \mathbb{F}1 \\ \text{ 3. } & \mathbb{F}\left( {\varphi \rightarrow \psi }\right) & & \rightarrow \mathbb{F}1 \end{matrix} \]

We now have a choice as to whether to apply \( \vee \mathbb{T} \) to line 2 or \( \rightarrow \mathbb{F} \) to line 3. It actually doesn't matter which order we pick, as long as each signed formula has its corresponding rule applied in every branch. So let's pick the first one. The \( \vee \mathbb{T} \) rule allows the tableau to branch, and the two conclusions of the rule will be the new signed formulas added to the two new branches. This results in:

![c5962287-92b4-4003-ac67-b592d0231929_283_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_283_0.jpg)

We have not applied the \( \rightarrow \mathbb{F} \) rule to line 3 yet: let's do that now. To save time, we apply it to both branches. Recall that we write a checkmark next to a signed formula only if we have applied the corresponding rule in every open branch. So it's a good idea to apply a rule at the end of every branch that contains the signed formula the rule applies to. That way we won't have to return to that signed formula lower down in the various branches.

![c5962287-92b4-4003-ac67-b592d0231929_284_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_284_0.jpg)

The right branch is now closed. On the left branch, we can still apply the \( \neg \mathbb{T} \) rule to line 4.
This results in \( \mathbb{F}\varphi \) and closes the left branch:

![c5962287-92b4-4003-ac67-b592d0231929_284_1.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_284_1.jpg)
Yes
We can give tableaux for any number of signed formulas as assumptions. Often it is also necessary to apply more than one rule that allows branching; and in general a tableau can have any number of branches. For instance, consider a tableau for \( \{ \mathbb{T}\varphi \vee \left( {\psi \land \chi }\right) ,\mathbb{F}\left( {\varphi \vee \psi }\right) \land \left( {\varphi \vee \chi }\right) \} \) .
We start by applying the \( \vee \mathbb{T} \) rule to the first assumption:

![c5962287-92b4-4003-ac67-b592d0231929_284_2.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_284_2.jpg)

Now we can apply the \( \land \mathbb{F} \) rule to line 2. We do this on both branches simultaneously, and can therefore check off line 2:

![c5962287-92b4-4003-ac67-b592d0231929_284_3.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_284_3.jpg)

Now we can apply \( \vee \mathbb{F} \) to all the branches containing \( \varphi \vee \psi \):

![c5962287-92b4-4003-ac67-b592d0231929_285_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_285_0.jpg)

The leftmost branch is now closed. Let's now apply \( \vee \mathbb{F} \) to \( \varphi \vee \chi \):

![c5962287-92b4-4003-ac67-b592d0231929_285_1.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_285_1.jpg)

Note that we moved the result of applying \( \vee \mathbb{F} \) a second time below for clarity. In this instance it would not have been needed, since the justifications would have been the same.

Two branches remain open, and \( \mathbb{T}\psi \land \chi \) on line 3 remains unchecked. We apply \( \land \mathbb{T} \) to it to obtain a closed tableau:

![c5962287-92b4-4003-ac67-b592d0231929_286_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_286_0.jpg)
Yes
Let’s see how we’d give a tableau for the sentence \( \exists x\neg \varphi \left( x\right) \rightarrow \neg \forall {x\varphi }\left( x\right) \) .
As usual, we start by recording the assumption:

\[ \text{1.}\;\mathbb{F}\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall x\varphi \left( x\right) \;\text{Assumption} \]

Since the main operator is \( \rightarrow \), we apply the \( \rightarrow \mathbb{F} \) rule:

1. \( \mathbb{F}\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall x\varphi \left( x\right) \checkmark \; \) Assumption
2. \( \mathbb{T}\exists x\neg \varphi \left( x\right) \; \rightarrow \mathbb{F}1 \)
3. \( \mathbb{F}\neg \forall x\varphi \left( x\right) \; \rightarrow \mathbb{F}1 \)

The next line to deal with is 2. We use \( \exists \mathbb{T} \). This requires a new constant symbol; since no constant symbols yet occur, we can pick any one, say, \( a \).

1. \( \mathbb{F}\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall x\varphi \left( x\right) \checkmark \; \) Assumption
2. \( \mathbb{T}\exists x\neg \varphi \left( x\right) \checkmark \; \rightarrow \mathbb{F}1 \)
3. \( \mathbb{F}\neg \forall x\varphi \left( x\right) \; \rightarrow \mathbb{F}1 \)
4. \( \mathbb{T}\neg \varphi \left( a\right) \;\exists \mathbb{T}2 \)

Now we apply \( \neg \mathbb{F} \) to line 3:

1. \( \mathbb{F}\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall x\varphi \left( x\right) \checkmark \; \) Assumption
2. \( \mathbb{T}\exists x\neg \varphi \left( x\right) \checkmark \; \rightarrow \mathbb{F}1 \)
3. \( \mathbb{F}\neg \forall x\varphi \left( x\right) \checkmark \; \rightarrow \mathbb{F}1 \)
4. \( \mathbb{T}\neg \varphi \left( a\right) \;\exists \mathbb{T}2 \)
5. \( \mathbb{T}\forall x\varphi \left( x\right) \;\neg \mathbb{F}3 \)

We obtain a closed tableau by applying \( \neg \mathbb{T} \) to line 4, followed by \( \forall \mathbb{T} \) to line 5.

1. \( \mathbb{F}\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall x\varphi \left( x\right) \checkmark \; \) Assumption
2. \( \mathbb{T}\exists x\neg \varphi \left( x\right) \checkmark \; \rightarrow \mathbb{F}1 \)
3. \( \mathbb{F}\neg \forall x\varphi \left( x\right) \checkmark \; \rightarrow \mathbb{F}1 \)
4. \( \mathbb{T}\neg \varphi \left( a\right) \checkmark \;\exists \mathbb{T}2 \)
5. \( \mathbb{T}\forall x\varphi \left( x\right) \;\neg \mathbb{F}3 \)
6. \( \mathbb{F}\varphi \left( a\right) \;\neg \mathbb{T}4 \)
7. \( \mathbb{T}\varphi \left( a\right) \;\forall \mathbb{T}5 \)
\( \otimes \)
Yes
We construct a tableau for the set

\[ \mathbb{T}\forall x\varphi \left( x\right) ,\mathbb{T}\forall x\varphi \left( x\right) \rightarrow \exists y\psi \left( y\right) ,\mathbb{T}\neg \exists y\psi \left( y\right) . \]
Starting as usual, we write down the assumptions:

\[ \text{1.}\;\mathbb{T}\forall x\varphi \left( x\right) \;\text{Assumption} \]

\[ \text{2.}\;\mathbb{T}\forall x\varphi \left( x\right) \rightarrow \exists y\psi \left( y\right) \;\text{Assumption} \]

\[ \text{3.}\;\mathbb{T}\neg \exists y\psi \left( y\right) \;\text{Assumption} \]

We begin by applying the \( \neg \mathbb{T} \) rule to line 3.
Yes
Proposition 19.13 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
Proof. If \( \varphi \in \Gamma \), then \( \{ \varphi \} \) is a finite subset of \( \Gamma \) and the tableau

1. \( \mathbb{F}\varphi \; \) Assumption
2. \( \mathbb{T}\varphi \; \) Assumption
\( \otimes \)

is closed.
No
Proposition 19.15 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
Proof. If \( \{ \varphi \} \cup \Delta \vdash \psi \), then there is a finite subset \( {\Delta }_{0} = \left\{ {{\chi }_{1},\ldots ,{\chi }_{n}}\right\} \subseteq \Delta \) such that

\[ \left\{ {\mathbb{F}\psi ,\mathbb{T}\varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{n}}\right\} \]

has a closed tableau. If \( \Gamma \vdash \varphi \) then there are \( {\theta }_{1},\ldots ,{\theta }_{m} \in \Gamma \) such that

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\theta }_{1},\ldots ,\mathbb{T}{\theta }_{m}}\right\} \]

has a closed tableau.

Now consider the tableau with assumptions

\[ \mathbb{F}\psi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{n},\mathbb{T}{\theta }_{1},\ldots ,\mathbb{T}{\theta }_{m} \]

Apply the Cut rule on \( \varphi \). This generates two branches, one with \( \mathbb{T}\varphi \) in it, the other with \( \mathbb{F}\varphi \). Thus, on the one branch, all of

\[ \left\{ {\mathbb{F}\psi ,\mathbb{T}\varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{n}}\right\} \]

are available. Since there is a closed tableau for these assumptions, we can attach it to that branch; every branch through \( \mathbb{T}\varphi \) closes. On the other branch, all of

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\theta }_{1},\ldots ,\mathbb{T}{\theta }_{m}}\right\} \]

are available, so we can also complete the other side to obtain a closed tableau. This shows \( \Gamma \cup \Delta \vdash \psi \).
Yes
Proposition 19.17 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \).
Proof. 1. If \( \Gamma \vdash \varphi \), then there is a finite subset \( {\Gamma }_{0} = \left\{ {{\psi }_{1},\ldots ,{\psi }_{n}}\right\} \subseteq \Gamma \) and a closed tableau for

\[ \mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n} \]

This tableau also shows \( {\Gamma }_{0} \vdash \varphi \).
Yes
Proposition 19.18. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
Proof. There are finite \( {\Gamma }_{0} = \left\{ {{\psi }_{1},\ldots ,{\psi }_{n}}\right\} \subseteq \Gamma \) and \( {\Gamma }_{1} = \left\{ {{\chi }_{1},\ldots ,{\chi }_{m}}\right\} \subseteq \Gamma \) such that

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

\[ \left\{ {\mathbb{T}\neg \varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{m}}\right\} \]

have closed tableaux. Using the Cut rule on \( \varphi \) we can combine these into a single closed tableau that shows \( {\Gamma }_{0} \cup {\Gamma }_{1} \) is inconsistent. Since \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \), \( {\Gamma }_{0} \cup {\Gamma }_{1} \subseteq \Gamma \), hence \( \Gamma \) is inconsistent.
Yes
Proposition 19.19. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
Proof. First suppose \( \Gamma \vdash \varphi \), i.e., there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) such that there is a closed tableau for

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

Using the \( \neg \mathbb{T} \) rule, this can be turned into a closed tableau for

\[ \left\{ {\mathbb{T}\neg \varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

On the other hand, if there is a closed tableau for the latter, we can turn it into a closed tableau of the former by removing every formula that results from \( \neg \mathbb{T} \) applied to the first assumption \( \mathbb{T}\neg \varphi \) as well as that assumption, and adding the assumption \( \mathbb{F}\varphi \). For if a branch was closed before because it contained the conclusion of \( \neg \mathbb{T} \) applied to \( \mathbb{T}\neg \varphi \), i.e., \( \mathbb{F}\varphi \), the corresponding branch in the new tableau is also closed. If a branch in the old tableau was closed because it contained the assumption \( \mathbb{T}\neg \varphi \) as well as \( \mathbb{F}\neg \varphi \), we can turn it into a closed branch by applying \( \neg \mathbb{F} \) to \( \mathbb{F}\neg \varphi \) to obtain \( \mathbb{T}\varphi \). This closes the branch since we added \( \mathbb{F}\varphi \) as an assumption.
Yes
Proposition 19.20. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
Proof. Suppose \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \). Then there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) such that

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

has a closed tableau. Replace the assumption \( \mathbb{F}\varphi \) by \( \mathbb{T}\neg \varphi \), and insert the conclusion of \( \neg \mathbb{T} \) applied to \( \mathbb{T}\neg \varphi \), i.e., \( \mathbb{F}\varphi \), after the assumptions. Any sentence in the tableau justified by appeal to line 1 in the old tableau is now justified by appeal to line \( n + 2 \). So if the old tableau was closed, the new one is. It shows that \( \Gamma \) is inconsistent, since all assumptions are in \( \Gamma \).
Yes
Proposition 19.21. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
Proof. If there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) and \( {\chi }_{1},\ldots ,{\chi }_{m} \in \Gamma \) such that

\[ \left\{ {\mathbb{T}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

\[ \left\{ {\mathbb{T}\neg \varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{m}}\right\} \]

both have closed tableaux, we can construct a tableau that shows that \( \Gamma \) is inconsistent by using as assumptions \( \mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n} \) together with \( \mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{m} \), followed by an application of the Cut rule, yielding two branches, one starting with \( \mathbb{T}\varphi \), the other with \( \mathbb{F}\varphi \). Add on the part below the assumptions of the first tableau on the left side. Here, every rule application is still correct, and every branch closes. On the right side, add the part below the assumptions of the second tableau, with the results of any applications of \( \neg \mathbb{T} \) to \( \mathbb{T}\neg \varphi \) removed.

For if a branch was closed before because it contained the conclusion of \( \neg \mathbb{T} \) applied to \( \mathbb{T}\neg \varphi \), i.e., \( \mathbb{F}\varphi \), the corresponding branch in the new tableau is also closed, since \( \mathbb{F}\varphi \) now heads that branch. If a branch in the old tableau was closed because it contained the assumption \( \mathbb{T}\neg \varphi \) as well as \( \mathbb{F}\neg \varphi \), we can turn it into a closed branch by applying \( \neg \mathbb{F} \) to \( \mathbb{F}\neg \varphi \) to obtain \( \mathbb{T}\varphi \), which closes the branch against \( \mathbb{F}\varphi \).
Yes
Proposition 19.22. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \) .
Proof. 1. Both \( \{ \mathbb{F}\varphi ,\mathbb{T}\varphi \land \psi \} \) and \( \{ \mathbb{F}\psi ,\mathbb{T}\varphi \land \psi \} \) have closed tableaux. For the first:

1. \( \mathbb{F}\varphi \; \) Assumption
2. \( \mathbb{T}\varphi \land \psi \; \) Assumption
3. \( \mathbb{T}\varphi \; \land \mathbb{T}2 \)
4. \( \mathbb{T}\psi \; \land \mathbb{T}2 \)
\( \otimes \)

The tableau for the second is analogous.
Yes
Proposition 19.23. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
Proof. 1. We give a closed tableau of \( \{ \mathbb{T}\varphi \vee \psi ,\mathbb{T}\neg \varphi ,\mathbb{T}\neg \psi \} \):

![c5962287-92b4-4003-ac67-b592d0231929_294_2.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_294_2.jpg)
Yes
Proposition 19.24. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) .
Proof. 1. \( \{ \mathbb{F}\psi ,\mathbb{T}\varphi \rightarrow \psi ,\mathbb{T}\varphi \} \) has a closed tableau:

![c5962287-92b4-4003-ac67-b592d0231929_295_1.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_295_1.jpg)
Yes
Theorem 19.25. If \( c \) is a constant not occurring in \( \Gamma \) or \( \varphi \left( x\right) \) and \( \Gamma \vdash \varphi \left( c\right) \), then \( \Gamma \vdash \forall {x\varphi }\left( x\right) \) .
Proof. Suppose \( \Gamma \vdash \varphi \left( c\right) \), i.e., there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) and a closed tableau for

\[ \left\{ {\mathbb{F}\varphi \left( c\right) ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

We have to show that there is also a closed tableau for

\[ \left\{ {\mathbb{F}\forall x\varphi \left( x\right) ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

Take the closed tableau and replace the first assumption with \( \mathbb{F}\forall x\varphi \left( x\right) \), and insert \( \mathbb{F}\varphi \left( c\right) \) after the assumptions.

![c5962287-92b4-4003-ac67-b592d0231929_296_0.jpg](images/c5962287-92b4-4003-ac67-b592d0231929_296_0.jpg)

The tableau is still closed, since all sentences available as assumptions before are still available at the top of the tableau. The inserted line is the result of a correct application of \( \forall \mathbb{F} \), since the constant symbol \( c \) does not occur in \( {\psi }_{1},\ldots ,{\psi }_{n} \) or \( \forall x\varphi \left( x\right) \), i.e., it does not occur above the inserted line in the new tableau.
Yes
Proposition 19.26. 1. \( \varphi \left( t\right) \vdash \exists {x\varphi }\left( x\right) \) .
1. A closed tableau for \( \mathbb{F}\exists {x\varphi }\left( x\right) ,\mathbb{T}\varphi \left( t\right) \) is:\n\n\[ \begin{matrix} \text{ 1. } & \mathbb{F}\exists {x\varphi }\left( x\right) & \text{ Assumption } \\ \text{ 2. } & \mathbb{T}\varphi \left( t\right) & \text{ Assumption } \\ \text{ 3. } & \mathbb{F}\varphi \left( t\right) & \exists \mathbb{F}1 \\ & \otimes & \end{matrix} \]
Yes
Corollary 19.31. If \( \Gamma \) is satisfiable, then it is consistent.
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) and a closed tableau for \( \{ \mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}\} \). By Theorem 19.28, there is no \( \mathfrak{M} \) such that \( \mathfrak{M} \vDash {\psi }_{i} \) for all \( i = 1,\ldots, n \). But then \( \Gamma \) is not satisfiable.
Yes
If \( s \) and \( t \) are closed terms, then \( s = t,\varphi \left( s\right) \vdash \varphi \left( t\right) \)
\[ \begin{matrix} \text{ 1. } & \mathbb{F}\varphi \left( t\right) & \text{ Assumption } \\ \text{ 2. } & \mathbb{T}s = t & \text{ Assumption } \\ \text{ 3. } & \mathbb{T}\varphi \left( s\right) & \text{ Assumption } \\ \text{ 4. } & \mathbb{T}\varphi \left( t\right) & = \mathbb{T}\;2,3 \\ & \otimes & \end{matrix} \]
No
Proposition 19.33. Tableaux with rules for identity are sound: no closed tableau is satisfiable.
Proof. We just have to show, as before, that if a tableau has a satisfiable branch, the branch resulting from applying one of the rules for \( = \) to it is also satisfiable. Let \( \Gamma \) be the set of signed formulas on the branch, and let \( \mathfrak{M} \) be a structure satisfying \( \Gamma \). Suppose the branch is expanded using \( = \), i.e., by adding the signed formula \( \mathbb{T}t = t \). Trivially, \( \mathfrak{M} \vDash t = t \), so \( \mathfrak{M} \) also satisfies \( \Gamma \cup \{ \mathbb{T}t = t\} \). If the branch is expanded using \( = \mathbb{T} \), we add a signed formula \( \mathbb{T}\varphi \left( {t}_{2}\right) \), where \( \Gamma \) contains both \( \mathbb{T}{t}_{1} = {t}_{2} \) and \( \mathbb{T}\varphi \left( {t}_{1}\right) \). Thus we have \( \mathfrak{M} \vDash {t}_{1} = {t}_{2} \) and \( \mathfrak{M} \vDash \varphi \left( {t}_{1}\right) \). Let \( s \) be a variable assignment with \( s\left( x\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{1}\right) \). By Proposition 14.41, \( \mathfrak{M}, s \vDash \varphi \left( {t}_{1}\right) \). Since \( s{ \sim }_{x}s \), by Proposition 14.46, \( \mathfrak{M}, s \vDash \varphi \left( x\right) \). Since \( \mathfrak{M} \vDash {t}_{1} = {t}_{2} \), we have \( {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{1}\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{2}\right) \), and hence \( s\left( x\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{2}\right) \). By applying Proposition 14.46 again, we also have \( \mathfrak{M}, s \vDash \varphi \left( {t}_{2}\right) \). By Proposition 14.41, \( \mathfrak{M} \vDash \varphi \left( {t}_{2}\right) \). The case of \( = \mathbb{F} \) is treated similarly.
Yes
Suppose we want to prove \( \left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) \) .
Clearly, this is not an instance of any of our axioms, so we have to use the MP rule to derive it. Our only rule is MP, which given \( \varphi \) and \( \varphi \rightarrow \psi \) allows us to justify \( \psi \). One strategy would be to use eq. (20.6) with \( \varphi \) being \( \neg \theta \), \( \psi \) being \( \alpha \), and \( \chi \) being \( \theta \rightarrow \alpha \), i.e., the instance

\[ \left( {\neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) }\right) . \]

Why? Two applications of MP yield the last part, which is what we want. And we easily see that \( \neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) \) is an instance of eq. (20.10), and \( \alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) \) is an instance of eq. (20.7). So our derivation is:

1. \( \neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) \)  eq. (20.10)

2. \( \left( {\neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) }\right) \)  eq. (20.6)

3. \( \left( {\alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \)  1, 2, MP

4. \( \alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) \)  eq. (20.7)

5. \( \left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) \)  3, 4, MP
Yes
Let’s try to find a derivation of \( \theta \rightarrow \theta \).
1. \( \theta \rightarrow \left( {\left( {\theta \rightarrow \theta }\right) \rightarrow \theta }\right) \)  eq. (20.7)

2. \( \left( {\theta \rightarrow \left( {\left( {\theta \rightarrow \theta }\right) \rightarrow \theta }\right) }\right) \rightarrow \left( {\left( {\theta \rightarrow \left( {\theta \rightarrow \theta }\right) }\right) \rightarrow \left( {\theta \rightarrow \theta }\right) }\right) \)  eq. (20.8)

3. \( \left( {\theta \rightarrow \left( {\theta \rightarrow \theta }\right) }\right) \rightarrow \left( {\theta \rightarrow \theta }\right) \)  1, 2, MP

4. \( \theta \rightarrow \left( {\theta \rightarrow \theta }\right) \)  eq. (20.7)

5. \( \theta \rightarrow \theta \)  3, 4, MP
Yes
Sometimes we want to show that there is a derivation of some formula from some other formulas \( \Gamma \) . For instance, let’s show that we can derive \( \varphi \rightarrow \chi \) from \( \Gamma = \{ \varphi \rightarrow \psi ,\psi \rightarrow \chi \} \) .
1. \( \varphi \rightarrow \psi \)  HYP

2. \( \psi \rightarrow \chi \)  HYP

3. \( \left( {\psi \rightarrow \chi }\right) \rightarrow \left( {\varphi \rightarrow \left( {\psi \rightarrow \chi }\right) }\right) \)  eq. (20.7)

4. \( \varphi \rightarrow \left( {\psi \rightarrow \chi }\right) \)  2, 3, MP

5. \( \left( {\varphi \rightarrow \left( {\psi \rightarrow \chi }\right) }\right) \rightarrow \left( {\left( {\varphi \rightarrow \psi }\right) \rightarrow \left( {\varphi \rightarrow \chi }\right) }\right) \)  eq. (20.8)

6. \( \left( {\varphi \rightarrow \psi }\right) \rightarrow \left( {\varphi \rightarrow \chi }\right) \)  4, 5, MP

7. \( \varphi \rightarrow \chi \)  1, 6, MP
Yes
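A derivation like the one above is easy to check mechanically: the only thing to verify at each line is that MP was applied to two earlier lines of the right shape. The following Python sketch is our own illustration (the tuple representation and justification format are invented for it; axiom instances and hypotheses are simply taken on trust). It checks the seven-line derivation of \( \varphi \rightarrow \chi \), with \( \varphi, \psi, \chi \) as `p`, `q`, `r` and 0-indexed line references.

```python
IMP = lambda a, b: ('imp', a, b)

def check(derivation):
    """Check a Hilbert-style derivation: each line is (formula, justification).
    Hypotheses ('HYP',) and axiom instances ('AX', ...) are taken on trust;
    ('MP', i, j) is correct iff line j is (line i) -> (this line)."""
    for k, (f, just) in enumerate(derivation):
        if just[0] == 'MP':
            i, j = just[1], just[2]
            if not (i < k and j < k
                    and derivation[j][0] == IMP(derivation[i][0], f)):
                return False
        elif just[0] not in ('HYP', 'AX'):
            return False
    return True

p, q, r = 'p', 'q', 'r'
# the derivation of phi -> chi above, with phi, psi, chi as p, q, r
deriv = [
    (IMP(p, q), ('HYP',)),
    (IMP(q, r), ('HYP',)),
    (IMP(IMP(q, r), IMP(p, IMP(q, r))), ('AX', 'eq. (20.7)')),
    (IMP(p, IMP(q, r)), ('MP', 1, 2)),
    (IMP(IMP(p, IMP(q, r)), IMP(IMP(p, q), IMP(p, r))), ('AX', 'eq. (20.8)')),
    (IMP(IMP(p, q), IMP(p, r)), ('MP', 3, 4)),
    (IMP(p, r), ('MP', 0, 5)),
]
assert check(deriv) and deriv[-1][0] == IMP(p, r)
```

A derivation whose MP step points at lines of the wrong shape is rejected, e.g. `check([('p', ('HYP',)), ('q', ('MP', 0, 0))])` is `False`.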
Proposition 20.12. If \( \Gamma \vdash \varphi \rightarrow \psi \) and \( \Gamma \vdash \psi \rightarrow \chi \), then \( \Gamma \vdash \varphi \rightarrow \chi \)
Proof. Suppose \( \Gamma \vdash \varphi \rightarrow \psi \) and \( \Gamma \vdash \psi \rightarrow \chi \). Then there is a derivation of \( \varphi \rightarrow \psi \) from \( \Gamma \), and a derivation of \( \psi \rightarrow \chi \) from \( \Gamma \) as well. Combine these into a single derivation by concatenating them. Now add lines 3-7 of the derivation in the preceding example. This is a derivation of \( \varphi \rightarrow \chi \), which is the last line of the new derivation, from \( \Gamma \). Note that the justifications of lines 4 and 7 remain valid if the reference to line number 2 is replaced by reference to the last line of the derivation of \( \psi \rightarrow \chi \), and the reference to line number 1 by reference to the last line of the derivation of \( \varphi \rightarrow \psi \).
Yes
Let us give a derivation of \( \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \forall x(\varphi \left( x\right) \land \psi \left( x\right) ) \) .
First, note that\n\n\[ \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \forall {x\varphi }\left( x\right) \]\n\nis an instance of eq. (20.1), and\n\n\[ \forall {x\varphi }\left( x\right) \rightarrow \varphi \left( a\right) \]\n\nof eq. (20.15). So, by Proposition 20.12, we know that\n\n\[ \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \varphi \left( a\right) \]\n\nis derivable. Likewise, since\n\n\[ \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \forall {y\psi }\left( y\right) \;\text{ and } \]\n\n\[ \forall {y\psi }\left( y\right) \rightarrow \psi \left( a\right) \]\n\nare instances of eq. (20.2) and eq. (20.15), respectively,\n\n\[ \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \psi \left( a\right) \]\n\nis derivable by Proposition 20.12. Using an appropriate instance of eq. (20.3) and two applications of MP, we see that\n\n\[ \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \left( {\varphi \left( a\right) \land \psi \left( a\right) }\right) \]\n\nis derivable. We can now apply QR to obtain\n\n\[ \left( {\forall {x\varphi }\left( x\right) \land \forall {y\psi }\left( y\right) }\right) \rightarrow \forall x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) .\n\]
Yes
Proposition 20.17 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
Proof. The formula \( \varphi \) by itself is a derivation of \( \varphi \) from \( \Gamma \) .
Yes
Proposition 20.18 (Monotony). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \) .
Proof. Any derivation of \( \varphi \) from \( \Gamma \) is also a derivation of \( \varphi \) from \( \Delta \) .
Yes
Proposition 20.19 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
Proof. Suppose \( \{ \varphi \} \cup \Delta \vdash \psi \). Then there is a derivation \( {\psi }_{1},\ldots ,{\psi }_{l} = \psi \) from \( \{ \varphi \} \cup \Delta \). Some of the steps in that derivation will be correct because of a rule which refers to a prior line \( {\psi }_{i} = \varphi \). By hypothesis, there is a derivation of \( \varphi \) from \( \Gamma \), i.e., a derivation \( {\varphi }_{1},\ldots ,{\varphi }_{k} = \varphi \) where every \( {\varphi }_{i} \) is an axiom, an element of \( \Gamma \), or correct by a rule of inference. Now consider the sequence

\[ {\varphi }_{1},\ldots ,{\varphi }_{k} = \varphi ,{\psi }_{1},\ldots ,{\psi }_{l} = \psi . \]

This is a correct derivation of \( \psi \) from \( \Gamma \cup \Delta \), since every \( {\psi }_{i} = \varphi \) is now justified by the same rule which justifies \( {\varphi }_{k} = \varphi \).
Yes
Proposition 20.21 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \).
1. If \( \Gamma \vdash \varphi \), then there is a finite sequence of formulas \( {\varphi }_{1},\ldots ,{\varphi }_{n} \) so that \( \varphi \equiv {\varphi }_{n} \) and each \( {\varphi }_{i} \) is either a logical axiom, an element of \( \Gamma \) or follows from previous formulas by modus ponens. Take \( {\Gamma }_{0} \) to be those \( {\varphi }_{i} \) which are in \( \Gamma \) . Then the derivation is likewise a derivation from \( {\Gamma }_{0} \) , and so \( {\Gamma }_{0} \vdash \varphi \).
Yes
Proposition 20.22. If \( \Gamma \vdash \varphi \) and \( \Gamma \vdash \varphi \rightarrow \psi \), then \( \Gamma \vdash \psi \) .
Proof. We have that \( \{ \varphi ,\varphi \rightarrow \psi \} \vdash \psi \) :\n\n1. \( \varphi \; \) Hyp.\n\n2. \( \varphi \rightarrow \psi \; \) Hyp.\n\n3. \( \psi \;1,2,\mathrm{{MP}} \)\n\nBy Proposition 20.19, \( \Gamma \vdash \psi \) .
Yes
Theorem 20.23 (Deduction Theorem). \( \Gamma \cup \{ \varphi \} \vdash \psi \) if and only if \( \Gamma \vdash \varphi \rightarrow \psi \) .
Proof. The "if" direction is immediate: suppose \( \Gamma \vdash \varphi \rightarrow \psi \). Then \( \Gamma \cup \{ \varphi \} \vdash \varphi \rightarrow \psi \) by Proposition 20.18, and \( \Gamma \cup \{ \varphi \} \vdash \varphi \) by Proposition 20.17, so \( \Gamma \cup \{ \varphi \} \vdash \psi \) by Proposition 20.22.

For the "only if" direction, we proceed by induction on the length of the derivation of \( \psi \) from \( \Gamma \cup \{ \varphi \} \). For the induction basis, \( \psi \) is either an axiom, an element of \( \Gamma \), or \( \varphi \) itself. In the first two cases, \( \Gamma \vdash \psi \), and since \( \psi \rightarrow \left( {\varphi \rightarrow \psi }\right) \) is an instance of eq. (20.7), \( \Gamma \vdash \varphi \rightarrow \psi \) by MP. If \( \psi \equiv \varphi \), then \( \Gamma \vdash \varphi \rightarrow \varphi \), since \( \varphi \rightarrow \varphi \) is derivable from the axioms alone (as in the derivation of \( \theta \rightarrow \theta \) above). For the inductive step, suppose the last step of the derivation is an application of MP to previous lines \( \chi \) and \( \chi \rightarrow \psi \). By induction hypothesis, \( \Gamma \vdash \varphi \rightarrow \chi \) and \( \Gamma \vdash \varphi \rightarrow \left( {\chi \rightarrow \psi }\right) \). Using the instance \( \left( {\varphi \rightarrow \left( {\chi \rightarrow \psi }\right) }\right) \rightarrow \left( {\left( {\varphi \rightarrow \chi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) }\right) \) of eq. (20.8) and two applications of MP, we get \( \Gamma \vdash \varphi \rightarrow \psi \).
No
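The proof of Theorem 20.23 is in effect an algorithm: it converts any derivation of \( \psi \) from \( \Gamma \cup \{ \varphi \} \) into a derivation of \( \varphi \rightarrow \psi \) from \( \Gamma \), line by line. Below is our own Python sketch of the propositional cases (induction basis and modus ponens only); the tuple representation and function names are invented for the example.

```python
IMP = lambda a, b: ('imp', a, b)

def deduce(phi, derivation):
    """Transform a derivation of psi from Gamma + {phi} into one of
    phi -> psi from Gamma.  Lines are (formula, justification) with
    justification ('HYP',), ('AX',), or ('MP', i, j), where line j
    must be (line i) -> (this line)."""
    new, loc = [], []   # loc[k]: index in `new` of the line  phi -> (old line k)
    for f, just in derivation:
        n = len(new)
        if f == phi:
            # induction basis, the line is phi itself: splice in the
            # five-line derivation of phi -> phi (as for theta -> theta)
            a = IMP(phi, IMP(IMP(phi, phi), phi))                    # eq. (20.7)
            b = IMP(a, IMP(IMP(phi, IMP(phi, phi)), IMP(phi, phi)))  # eq. (20.8)
            new.append((a, ('AX',)))
            new.append((b, ('AX',)))
            new.append((IMP(IMP(phi, IMP(phi, phi)), IMP(phi, phi)),
                        ('MP', n, n + 1)))
            new.append((IMP(phi, IMP(phi, phi)), ('AX',)))           # eq. (20.7)
            new.append((IMP(phi, phi), ('MP', n + 3, n + 2)))
            loc.append(n + 4)
        elif just[0] != 'MP':
            # induction basis, an axiom or an element of Gamma:
            # keep f, add f -> (phi -> f) by eq. (20.7), then apply MP
            new.append((f, just))
            new.append((IMP(f, IMP(phi, f)), ('AX',)))
            new.append((IMP(phi, f), ('MP', n, n + 1)))
            loc.append(n + 2)
        else:
            # inductive step: f came by MP from g (line i) and g -> f (line j)
            i, j = just[1], just[2]
            g = derivation[i][0]
            new.append((IMP(IMP(phi, IMP(g, f)),
                            IMP(IMP(phi, g), IMP(phi, f))), ('AX',)))  # eq. (20.8)
            new.append((IMP(IMP(phi, g), IMP(phi, f)), ('MP', loc[j], n)))
            new.append((IMP(phi, f), ('MP', loc[i], n + 1)))
            loc.append(n + 2)
    return new

def valid(d):
    """Spot-check every MP step of a derivation."""
    return all(j[0] != 'MP' or d[j[2]][0] == ('imp', d[j[1]][0], f)
               for f, j in d)

# a derivation of r from {p} together with Gamma = {p -> q, q -> r} ...
der = [('p', ('HYP',)), (IMP('p', 'q'), ('HYP',)), ('q', ('MP', 0, 1)),
       (IMP('q', 'r'), ('HYP',)), ('r', ('MP', 2, 3))]
new = deduce('p', der)
# ... becomes a derivation of p -> r whose MP steps all check out
assert valid(new) and new[-1][0] == IMP('p', 'r')
```

Each old line \( \chi_k \) is mapped to a line \( \varphi \rightarrow \chi_k \), exactly mirroring the induction in the proof.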
Theorem 20.25 (Deduction Theorem). If \( \Gamma \cup \{ \varphi \} \vdash \psi \), then \( \Gamma \vdash \varphi \rightarrow \psi \) .
Proof. We again proceed by induction on the length of the derivation of \( \psi \) from \( \Gamma \cup \{ \varphi \} \). The proof of the induction basis is identical to that in the proof of Theorem 20.23. For the inductive step, suppose again that the derivation of \( \psi \) from \( \Gamma \cup \{ \varphi \} \) ends with a step \( \psi \) which is justified by an inference rule. If the inference rule is modus ponens, we proceed as in the proof of Theorem 20.23. If the inference rule is QR, we know that \( \psi \equiv \chi \rightarrow \forall {x\theta }\left( x\right) \) and a formula of the form \( \chi \rightarrow \theta \left( a\right) \) appears earlier in the derivation, where \( a \) does not occur in \( \chi ,\varphi \), or \( \Gamma \). We thus have that

\[ \Gamma \cup \{ \varphi \} \vdash \chi \rightarrow \theta \left( a\right) , \]

and the induction hypothesis applies, i.e., we have that

\[ \Gamma \vdash \varphi \rightarrow \left( {\chi \rightarrow \theta \left( a\right) }\right) . \]

By

\[ \vdash \left( {\varphi \rightarrow \left( {\chi \rightarrow \theta \left( a\right) }\right) }\right) \rightarrow \left( {\left( {\varphi \land \chi }\right) \rightarrow \theta \left( a\right) }\right) \]

and modus ponens we get

\[ \Gamma \vdash \left( {\varphi \land \chi }\right) \rightarrow \theta \left( a\right) . \]

Since the eigenvariable condition still applies, we can add a step to this derivation justified by QR, and get

\[ \Gamma \vdash \left( {\varphi \land \chi }\right) \rightarrow \forall {x\theta }\left( x\right) . \]

We also have

\[ \vdash \left( {\left( {\varphi \land \chi }\right) \rightarrow \forall {x\theta }\left( x\right) }\right) \rightarrow \left( {\varphi \rightarrow \left( {\chi \rightarrow \forall {x\theta }\left( x\right) }\right) }\right) , \]

so by modus ponens,

\[ \Gamma \vdash \varphi \rightarrow \left( {\chi \rightarrow \forall {x\theta }\left( x\right) }\right) , \]

i.e., \( \Gamma \vdash \psi \).
We leave the case where \( \psi \) is justified by the rule QR, but is of the form \( \exists {x\theta }\left( x\right) \rightarrow \chi \), as an exercise.
No
Proposition 20.26. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
Proof. If \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \cup \{ \varphi \} \vdash \bot \) . By Proposition 20.17, \( \Gamma \vdash \psi \) for every \( \psi \in \Gamma \) . Since also \( \Gamma \vdash \varphi \) by hypothesis, \( \Gamma \vdash \psi \) for every \( \psi \in \Gamma \cup \{ \varphi \} \) . By Proposition 20.19, \( \Gamma \vdash \bot \), i.e., \( \Gamma \) is inconsistent.
Yes
Proposition 20.27. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
Proof. First suppose \( \Gamma \vdash \varphi \). Then \( \Gamma \cup \{ \neg \varphi \} \vdash \varphi \) by Proposition 20.18, and \( \Gamma \cup \{ \neg \varphi \} \vdash \neg \varphi \) by Proposition 20.17. We also have \( \vdash \neg \varphi \rightarrow \left( {\varphi \rightarrow \bot }\right) \) by eq. (20.10). So by two applications of Proposition 20.22, we have \( \Gamma \cup \{ \neg \varphi \} \vdash \bot \). Now assume \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent, i.e., \( \Gamma \cup \{ \neg \varphi \} \vdash \bot \). By the deduction theorem, \( \Gamma \vdash \neg \varphi \rightarrow \bot \). By eq. (20.13), \( \Gamma \vdash \left( {\neg \varphi \rightarrow \bot }\right) \rightarrow \neg \neg \varphi \), so \( \Gamma \vdash \neg \neg \varphi \) by Proposition 20.22. Since \( \Gamma \vdash \neg \neg \varphi \rightarrow \varphi \) (eq. (20.14)), we have \( \Gamma \vdash \varphi \) by Proposition 20.22 again.
Yes
Proposition 20.28. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
Proof. Since \( \neg \varphi \in \Gamma \), \( \Gamma \vdash \neg \varphi \) by Proposition 20.17, and \( \Gamma \vdash \varphi \) by hypothesis. \( \Gamma \vdash \neg \varphi \rightarrow \left( {\varphi \rightarrow \bot }\right) \) by eq. (20.10). So \( \Gamma \vdash \bot \) by two applications of Proposition 20.22.
Yes
Proposition 20.29. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
Proof. Exercise.
No
Proposition 20.31. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
Proof. 1. From eq. (20.9) we get \( \vdash \neg \varphi \rightarrow \left( {\varphi \rightarrow \bot }\right) \) and \( \vdash \neg \psi \rightarrow \left( {\psi \rightarrow \bot }\right) \). So by the deduction theorem, we have \( \{ \neg \varphi \} \vdash \varphi \rightarrow \bot \) and \( \{ \neg \psi \} \vdash \psi \rightarrow \bot \). From eq. (20.6) we get \( \{ \neg \varphi ,\neg \psi \} \vdash \left( {\varphi \vee \psi }\right) \rightarrow \bot \). By the deduction theorem, \( \{ \varphi \vee \psi ,\neg \varphi ,\neg \psi \} \vdash \bot \).
Yes
Proposition 20.32. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) .
Proof. 1. We can derive:\n\n1. \( \varphi \;\mathrm{{HYP}} \)\n\n2. \( \varphi \rightarrow \psi \;\mathrm{{HYP}} \)\n\n3. \( \psi \;1,2,\mathrm{{MP}} \)
Yes
Theorem 20.33. If \( c \) is a constant symbol not occurring in \( \Gamma \) or \( \varphi \left( x\right) \) and \( \Gamma \vdash \varphi \left( c\right) \) , then \( \Gamma \vdash \forall {x\varphi }\left( x\right) \) .
Proof. By the deduction theorem, \( \Gamma \vdash \top \rightarrow \varphi \left( c\right) \). Since \( c \) does not occur in \( \Gamma \) or \( \top \), we get \( \Gamma \vdash \top \rightarrow \forall {x\varphi }\left( x\right) \) by QR. Since \( \vdash \top \), we have \( \Gamma \vdash \top \), and so \( \Gamma \vdash \forall {x\varphi }\left( x\right) \) by Proposition 20.22.
No
Proposition 20.34. 1. \( \varphi \left( t\right) \vdash \exists {x\varphi }\left( x\right) \) .
1. By eq. (20.16) and the deduction theorem.
No
Proposition 20.35. If \( \varphi \) is an axiom, then \( \mathfrak{M}, s \vDash \varphi \) for each structure \( \mathfrak{M} \) and assignment \( s \) .
Proof. We have to verify that all the axioms are valid. For instance, here is the case for eq. (20.15): suppose \( t \) is free for \( x \) in \( \varphi \), and assume \( \mathfrak{M}, s \vDash \forall {x\varphi } \) . Then by definition of satisfaction, for each \( {s}^{\prime }{ \sim }_{x}s \), also \( \mathfrak{M},{s}^{\prime } \vDash \varphi \), and in particular this holds when \( {s}^{\prime }\left( x\right) = {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( t\right) \) . By Proposition 14.46, \( \mathfrak{M}, s \vDash \varphi \left\lbrack {t/x}\right\rbrack \) . This shows that \( \mathfrak{M}, s \vDash \left( {\forall {x\varphi } \rightarrow \varphi \left\lbrack {t/x}\right\rbrack }\right) \) .
Yes
Corollary 20.38. If \( \Gamma \) is satisfiable, then it is consistent.
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then \( \Gamma \vdash \bot \), i.e., there is a derivation of \( \bot \) from \( \Gamma \). By Theorem 20.36, any structure \( \mathfrak{M} \) that satisfies \( \Gamma \) must satisfy \( \bot \). Since \( \mathfrak{M} \nvDash \bot \) for every structure \( \mathfrak{M} \), no \( \mathfrak{M} \) can satisfy \( \Gamma \), i.e., \( \Gamma \) is not satisfiable.
Yes
Proposition 20.40. The axioms eq. (20.17) and eq. (20.18) are valid.
Proof. Exercise.
No
Proposition 20.42. If \( \Gamma \vdash \varphi \left( {t}_{1}\right) \) and \( \Gamma \vdash {t}_{1} = {t}_{2} \), then \( \Gamma \vdash \varphi \left( {t}_{2}\right) \) .
Proof. The formula\n\n\[ \left( {{t}_{1} = {t}_{2} \rightarrow \left( {\varphi \left( {t}_{1}\right) \rightarrow \varphi \left( {t}_{2}\right) }\right) }\right) \]\n\n is an instance of eq. (20.18). The conclusion follows by two applications of MP. \( ▱ \)
Yes
Proposition 21.2. Suppose \( \Gamma \) is complete and consistent. Then:\n\n1. If \( \Gamma \vdash \varphi \), then \( \varphi \in \Gamma \) .
Proof. Let us suppose for all of the following that \( \Gamma \) is complete and consistent.\n\n1. If \( \Gamma \vdash \varphi \), then \( \varphi \in \Gamma \).\n\nSuppose that \( \Gamma \vdash \varphi \) . Suppose to the contrary that \( \varphi \notin \Gamma \) . Since \( \Gamma \) is complete, \( \neg \varphi \in \Gamma \) . By Propositions 17.20 to 19.20 and 20.28, \( \Gamma \) is inconsistent. This contradicts the assumption that \( \Gamma \) is consistent. Hence, it cannot be the case that \( \varphi \notin \Gamma \), so \( \varphi \in \Gamma \) .
Yes
Lemma 21.6. Every consistent set \( \Gamma \) can be extended to a saturated consistent set \( {\Gamma }^{\prime } \) .
Proof. Given a consistent set of sentences \( \Gamma \) in a language \( \mathcal{L} \), expand the language by adding a denumerable set of new constant symbols to form \( {\mathcal{L}}^{\prime } \). By Proposition 21.3, \( \Gamma \) is still consistent in the richer language. Further, let \( {\theta }_{i} \) be as in Definition 21.5. Let

\[ {\Gamma }_{0} = \Gamma \]

\[ {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\theta }_{n}\right\} , \]

i.e., \( {\Gamma }_{n + 1} = \Gamma \cup \left\{ {{\theta }_{0},\ldots ,{\theta }_{n}}\right\} \), and let \( {\Gamma }^{\prime } = \mathop{\bigcup }\limits_{n}{\Gamma }_{n} \). \( {\Gamma }^{\prime } \) is clearly saturated.

If \( {\Gamma }^{\prime } \) were inconsistent, then for some \( n \), \( {\Gamma }_{n} \) would be inconsistent (Exercise: explain why). So to show that \( {\Gamma }^{\prime } \) is consistent it suffices to show, by induction on \( n \), that each set \( {\Gamma }_{n} \) is consistent.

The induction basis is simply the claim that \( {\Gamma }_{0} = \Gamma \) is consistent, which is the hypothesis of the theorem. For the induction step, suppose that \( {\Gamma }_{n} \) is consistent but \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\theta }_{n}\right\} \) is inconsistent. Recall that \( {\theta }_{n} \) is \( \exists {x}_{n}{\varphi }_{n}\left( {x}_{n}\right) \rightarrow {\varphi }_{n}\left( {c}_{n}\right) \), where \( {\varphi }_{n}\left( {x}_{n}\right) \) is a formula of \( {\mathcal{L}}^{\prime } \) with only the variable \( {x}_{n} \) free.
By the way we’ve chosen the \( {c}_{n} \) (see Definition 21.5), \( {c}_{n} \) does not occur in \( {\varphi }_{n}\left( {x}_{n}\right) \) nor in \( {\Gamma }_{n} \).

If \( {\Gamma }_{n} \cup \left\{ {\theta }_{n}\right\} \) is inconsistent, then \( {\Gamma }_{n} \vdash \neg {\theta }_{n} \), and hence both of the following hold:

\[ {\Gamma }_{n} \vdash \exists {x}_{n}{\varphi }_{n}\left( {x}_{n}\right) \;{\Gamma }_{n} \vdash \neg {\varphi }_{n}\left( {c}_{n}\right) \]

Since \( {c}_{n} \) does not occur in \( {\Gamma }_{n} \) or in \( {\varphi }_{n}\left( {x}_{n}\right) \), Theorems 17.25, 18.25, 19.25 and 20.33 apply. From \( {\Gamma }_{n} \vdash \neg {\varphi }_{n}\left( {c}_{n}\right) \), we obtain \( {\Gamma }_{n} \vdash \forall {x}_{n}\neg {\varphi }_{n}\left( {x}_{n}\right) \). Thus we have that both \( {\Gamma }_{n} \vdash \exists {x}_{n}{\varphi }_{n}\left( {x}_{n}\right) \) and \( {\Gamma }_{n} \vdash \forall {x}_{n}\neg {\varphi }_{n}\left( {x}_{n}\right) \), so \( {\Gamma }_{n} \) itself is inconsistent. (Note that \( \forall {x}_{n}\neg {\varphi }_{n}\left( {x}_{n}\right) \vdash \neg \exists {x}_{n}{\varphi }_{n}\left( {x}_{n}\right) \).) Contradiction: \( {\Gamma }_{n} \) was supposed to be consistent. Hence \( {\Gamma }_{n} \cup \left\{ {\theta }_{n}\right\} \) is consistent.
Yes
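The construction of \( \Gamma' \) is again algorithmic: for each formula \( \varphi_n(x_n) \) in the enumeration, we append the witnessing conditional \( \theta_n \) with a constant that is fresh by construction. A small Python sketch of our own (formulas are plain strings here; `enumeration` supplies each \( \varphi_n \) as a function from a variable or constant name to a formula string):

```python
def saturate(gamma, enumeration):
    """Build Gamma' = Gamma + {theta_0, theta_1, ...}, where
    theta_n = (exists x_n) phi_n(x_n) -> phi_n(c_n) and c_n is a fresh
    constant (fresh by construction, as in Definition 21.5)."""
    g = list(gamma)
    for n, phi in enumerate(enumeration):
        x, c = f'x_{n}', f'c_{n}'
        g.append(f'(exists {x}){phi(x)} -> {phi(c)}')
    return g

# two formulas: phi_0(x) = P(x) and phi_1(x) = Q(x)
gamma_prime = saturate(['A'], [lambda t: f'P({t})', lambda t: f'Q({t})'])
assert gamma_prime == ['A',
                       '(exists x_0)P(x_0) -> P(c_0)',
                       '(exists x_1)Q(x_1) -> Q(c_1)']
```

The sketch only produces the sentences \( \theta_n \); the consistency of each \( \Gamma_n \) is what the inductive argument in the proof establishes.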
Proposition 21.7. Suppose \( \Gamma \) is complete, consistent, and saturated.\n\n1. \( \exists {x\varphi }\left( x\right) \in \Gamma \) iff \( \varphi \left( t\right) \in \Gamma \) for at least one closed term \( t \) .
Proof. 1. First suppose that \( \exists {x\varphi }\left( x\right) \in \Gamma \) . Because \( \Gamma \) is saturated, \( (\exists {x\varphi }\left( x\right) \rightarrow \) \( \varphi \left( c\right) ) \in \Gamma \) for some constant symbol \( c \) . By Propositions 17.24 to 19.24 and 20.32, item (1), and Proposition 21.2(1), \( \varphi \left( c\right) \in \Gamma \) .\n\nFor the other direction, saturation is not necessary: Suppose \( \varphi \left( t\right) \in \Gamma \) . Then \( \Gamma \vdash \exists {x\varphi }\left( x\right) \) by Propositions 17.26 to 19.26 and 20.34, item (1). By Proposition 21.2(1), \( \exists {x\varphi }\left( x\right) \in \Gamma \) .
Yes
Lemma 21.8 (Lindenbaum’s Lemma). Every consistent set \( \Gamma \) in a language \( \mathcal{L} \) can be extended to a complete and consistent set \( {\Gamma }^{ * } \) .
Proof. Let \( \Gamma \) be consistent. Let \( {\varphi }_{0},{\varphi }_{1},\ldots \) be an enumeration of all the sentences of \( \mathcal{L} \). Define \( {\Gamma }_{0} = \Gamma \), and

\[ {\Gamma }_{n + 1} = \left\{ \begin{array}{ll} {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} & \text{ if }{\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \text{ is consistent; } \\ {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} & \text{ otherwise. } \end{array}\right. \]

Let \( {\Gamma }^{ * } = \mathop{\bigcup }\limits_{{n \geq 0}}{\Gamma }_{n} \).

Each \( {\Gamma }_{n} \) is consistent: \( {\Gamma }_{0} \) is consistent by definition. If \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \), this is because the latter is consistent. If it isn’t, \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \). We have to verify that \( {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \) is consistent. Suppose it’s not. Then both \( {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \) and \( {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \) are inconsistent. This means that \( {\Gamma }_{n} \) would be inconsistent by Propositions 17.20 to 19.20 and 20.29, contrary to the induction hypothesis.

For every \( n \) and every \( i < n \), \( {\Gamma }_{i} \subseteq {\Gamma }_{n} \). This follows by a simple induction on \( n \). For \( n = 0 \), there are no \( i < 0 \), so the claim holds automatically. For the inductive step, suppose it is true for \( n \). We have \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \) or \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \) by construction. So \( {\Gamma }_{n} \subseteq {\Gamma }_{n + 1} \).
If \( i < n \), then \( {\Gamma }_{i} \subseteq {\Gamma }_{n} \) by inductive hypothesis, and so \( \subseteq {\Gamma }_{n + 1} \) by transitivity of \( \subseteq \) .\n\nFrom this it follows that every finite subset of \( {\Gamma }^{ * } \) is a subset of \( {\Gamma }_{n} \) for some \( n \), since each \( \psi \in {\Gamma }^{ * } \) not already in \( {\Gamma }_{0} \) is added at some stage \( i \) . If \( n \) is the last one of these, then all \( \psi \) in the finite subset are in \( {\Gamma }_{n} \) . So, every finite subset of \( {\Gamma }^{ * } \) is consistent. By Propositions 17.17 to 19.17 and 20.21, \( {\Gamma }^{ * } \) is consistent.\n\nEvery sentence of \( \operatorname{Frm}\left( \mathcal{L}\right) \) appears on the list used to define \( {\Gamma }^{ * } \) . If \( {\varphi }_{n} \notin {\Gamma }^{ * } \) , then that is because \( {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \) was inconsistent. But then \( \neg {\varphi }_{n} \in {\Gamma }^{ * } \), so \( {\Gamma }^{ * } \) is complete.
Yes
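For a finite propositional language, the construction in Lindenbaum's Lemma can be carried out explicitly. In this sketch (ours, not the text's), consistency is tested by brute-force satisfiability, a legitimate stand-in by soundness and completeness; `lindenbaum` adds \( \varphi_n \) when \( \Gamma_n \cup \{\varphi_n\} \) is consistent and \( \neg\varphi_n \) otherwise.

```python
from itertools import product

def holds(f, v):
    """Evaluate a propositional sentence under valuation v (dict: atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not holds(f[1], v)
    if f[0] == 'and':
        return holds(f[1], v) and holds(f[2], v)
    if f[0] == 'or':
        return holds(f[1], v) or holds(f[2], v)
    if f[0] == 'imp':
        return (not holds(f[1], v)) or holds(f[2], v)

def consistent(sentences, atoms):
    """Stand-in for consistency: brute-force satisfiability over all valuations."""
    return any(all(holds(f, dict(zip(atoms, bits))) for f in sentences)
               for bits in product([True, False], repeat=len(atoms)))

def lindenbaum(gamma, enumeration, atoms):
    """Gamma_{n+1} = Gamma_n + {phi_n} if that is consistent,
    else Gamma_n + {not phi_n}; return the union."""
    g = set(gamma)
    for f in enumeration:
        g.add(f if consistent(g | {f}, atoms) else ('not', f))
    return g

atoms = ['p', 'q']
gamma = {'p', ('imp', 'p', 'q')}
enumeration = ['p', 'q', ('not', 'p'), ('and', 'p', ('not', 'q'))]
gstar = lindenbaum(gamma, enumeration, atoms)
# gstar is consistent and decides every enumerated sentence
assert consistent(gstar, atoms)
assert all((f in gstar) != (('not', f) in gstar) for f in enumeration)
```

The final assertions check exactly the two conclusions of the lemma restricted to the enumerated sentences: consistency, and completeness (each \( \varphi_n \) or its negation is in \( \Gamma^* \)).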
1. \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \exists {x\varphi }\left( x\right) \) iff \( \mathfrak{M} \vDash \varphi \left( t\right) \) for at least one term \( t \) .
1. By Proposition 14.42, \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \exists {x\varphi }\left( x\right) \) iff for at least one variable assignment \( s,\mathfrak{M}\left( {\Gamma }^{ * }\right), s \vDash \varphi \left( x\right) \) . As \( \left| {\mathfrak{M}\left( {\Gamma }^{ * }\right) }\right| \) consists of the closed terms of \( \mathcal{L} \), this is the case iff there is at least one closed term \( t \) such that \( s\left( x\right) = t \) and \( \mathfrak{M}\left( {\Gamma }^{ * }\right), s \vDash \varphi \left( x\right) \) . By Proposition 14.46, \( \mathfrak{M}\left( {\Gamma }^{ * }\right), s \vDash \varphi \left( x\right) \) iff \( \mathfrak{M}\left( {\Gamma }^{ * }\right), s \vDash \varphi \left( t\right) \), where \( s\left( x\right) = t \) . By Proposition 14.41, \( \mathfrak{M}\left( {\Gamma }^{ * }\right), s \vDash \varphi \left( t\right) \) iff \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \left( t\right) \), since \( \varphi \left( t\right) \) is a sentence.
Yes
Lemma 21.11 (Truth Lemma). Suppose \( \varphi \) does not contain \( = \) . Then \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \varphi \in {\Gamma }^{ * } \) .
Proof. We prove both directions simultaneously, and by induction on \( \varphi \).

1. \( \varphi \equiv \bot : \mathfrak{M}\left( {\Gamma }^{ * }\right) \nvDash \bot \) by definition of satisfaction. On the other hand, \( \bot \notin {\Gamma }^{ * } \) since \( {\Gamma }^{ * } \) is consistent.

2. \( \varphi \equiv R\left( {{t}_{1},\ldots ,{t}_{n}}\right) : \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash R\left( {{t}_{1},\ldots ,{t}_{n}}\right) \) iff \( \left\langle {{t}_{1},\ldots ,{t}_{n}}\right\rangle \in {R}^{\mathfrak{M}\left( {\Gamma }^{ * }\right) } \) (by the definition of satisfaction) iff \( R\left( {{t}_{1},\ldots ,{t}_{n}}\right) \in {\Gamma }^{ * } \) (by the construction of \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \)).

3. \( \varphi \equiv \neg \psi : \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \nvDash \psi \) (by definition of satisfaction). By induction hypothesis, \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \nvDash \psi \) iff \( \psi \notin {\Gamma }^{ * } \). Since \( {\Gamma }^{ * } \) is consistent and complete, \( \psi \notin {\Gamma }^{ * } \) iff \( \neg \psi \in {\Gamma }^{ * } \).

4. \( \varphi \equiv \psi \land \chi \): exercise.

5. \( \varphi \equiv \psi \vee \chi : \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \psi \) or \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \chi \) (by definition of satisfaction) iff \( \psi \in {\Gamma }^{ * } \) or \( \chi \in {\Gamma }^{ * } \) (by induction hypothesis). This is the case iff \( \left( {\psi \vee \chi }\right) \in {\Gamma }^{ * } \) (by Proposition 21.2(3)).

6. \( \varphi \equiv \psi \rightarrow \chi \): exercise.

7. \( \varphi \equiv \forall {x\psi }\left( x\right) \): exercise.

8. 
\( \varphi \equiv \exists {x\psi }\left( x\right) : \;\mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \psi \left( t\right) \) for at least one term \( t \) (Proposition 21.10). By induction hypothesis, this is the case iff \( \psi \left( t\right) \in \) \( {\Gamma }^{ * } \) for at least one term \( t \) . By Proposition 21.7, this in turn is the case iff \( \exists {x\psi }\left( x\right) \in {\Gamma }^{ * } \) .
No
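The induction in the Truth Lemma can be mirrored computationally: define satisfaction in the model built from \( \Gamma^* \) by recursion on the formula, consulting \( \Gamma^* \) only at the atomic case. The toy below (propositional connectives only, with our own tuple representation) checks the lemma's biconditional \( \mathfrak{M}(\Gamma^*) \vDash \varphi \) iff \( \varphi \in \Gamma^* \) on a small complete consistent set.

```python
def truth(f, gstar):
    """The Truth Lemma's recursion (propositional connectives only):
    satisfaction in the model built from Gamma* consults Gamma*
    exactly at the atomic case."""
    if isinstance(f, str):            # atomic: true iff the atom is in Gamma*
        return f in gstar
    if f[0] == 'not':
        return not truth(f[1], gstar)
    if f[0] == 'or':
        return truth(f[1], gstar) or truth(f[2], gstar)
    if f[0] == 'imp':
        return (not truth(f[1], gstar)) or truth(f[2], gstar)

def ev(f, v):
    """Ordinary truth-table evaluation, used only to build a toy Gamma*."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'or':
        return ev(f[1], v) or ev(f[2], v)
    if f[0] == 'imp':
        return (not ev(f[1], v)) or ev(f[2], v)

# a toy complete consistent set: all listed sentences true when p holds, q fails
v = {'p': True, 'q': False}
sentences = ['p', 'q', ('not', 'q'), ('or', 'p', 'q'),
             ('imp', 'p', 'q'), ('not', ('imp', 'p', 'q'))]
gstar = {f for f in sentences if ev(f, v)}
# Truth Lemma, in miniature: M(Gamma*) |= phi iff phi in Gamma*
assert all(truth(f, gstar) == (f in gstar) for f in sentences)
```

The recursion bottoms out in membership of atoms in \( \Gamma^* \), just as the basis clauses of the proof do; completeness and consistency of \( \Gamma^* \) are what make the negation clause come out right.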
Proposition 21.13. The relation \( \approx \) has the following properties:\n\n1. \( \approx \) is reflexive.\n\n2. \( \approx \) is symmetric.\n\n3. \( \approx \) is transitive.\n\n4. If \( t \approx {t}^{\prime } \) , \( f \) is a function symbol, and \( {t}_{1},\ldots ,{t}_{i - 1},{t}_{i + 1},\ldots ,{t}_{n} \) are terms, then\n\n\[ f\left( {{t}_{1},\ldots ,{t}_{i - 1}, t,{t}_{i + 1},\ldots ,{t}_{n}}\right) \approx f\left( {{t}_{1},\ldots ,{t}_{i - 1},{t}^{\prime },{t}_{i + 1},\ldots ,{t}_{n}}\right) . \]\n\n5. If \( t \approx {t}^{\prime }, R \) is a predicate symbol, and \( {t}_{1},\ldots ,{t}_{i - 1},{t}_{i + 1},\ldots ,{t}_{n} \) are terms, then\n\n\[ R\left( {{t}_{1},\ldots ,{t}_{i - 1}, t,{t}_{i + 1},\ldots ,{t}_{n}}\right) \in {\Gamma }^{ * }\text{iff} \]\n\n\[ R\left( {{t}_{1},\ldots ,{t}_{i - 1},{t}^{\prime },{t}_{i + 1},\ldots ,{t}_{n}}\right) \in {\Gamma }^{ * }. \]
Proof. Since \( {\Gamma }^{ * } \) is consistent and complete, \( t = {t}^{\prime } \in {\Gamma }^{ * } \) iff \( {\Gamma }^{ * } \vdash t = {t}^{\prime } \). Thus it is enough to show the following:

1. \( {\Gamma }^{ * } \vdash t = t \) for all terms \( t \).

2. If \( {\Gamma }^{ * } \vdash t = {t}^{\prime } \) then \( {\Gamma }^{ * } \vdash {t}^{\prime } = t \).

3. If \( {\Gamma }^{ * } \vdash t = {t}^{\prime } \) and \( {\Gamma }^{ * } \vdash {t}^{\prime } = {t}^{\prime \prime } \), then \( {\Gamma }^{ * } \vdash t = {t}^{\prime \prime } \).

4. If \( {\Gamma }^{ * } \vdash t = {t}^{\prime } \), then

\[ {\Gamma }^{ * } \vdash f\left( {{t}_{1},\ldots ,{t}_{i - 1}, t,{t}_{i + 1},\ldots ,{t}_{n}}\right) = f\left( {{t}_{1},\ldots ,{t}_{i - 1},{t}^{\prime },{t}_{i + 1},\ldots ,{t}_{n}}\right) \]

for every \( n \)-place function symbol \( f \) and terms \( {t}_{1},\ldots ,{t}_{i - 1},{t}_{i + 1},\ldots ,{t}_{n} \).

5. If \( {\Gamma }^{ * } \vdash t = {t}^{\prime } \) and \( {\Gamma }^{ * } \vdash R\left( {{t}_{1},\ldots ,{t}_{i - 1}, t,{t}_{i + 1},\ldots ,{t}_{n}}\right) \), then \( {\Gamma }^{ * } \vdash R\left( {{t}_{1},\ldots ,{t}_{i - 1},{t}^{\prime },{t}_{i + 1},\ldots ,{t}_{n}}\right) \) for every \( n \)-place predicate symbol \( R \) and terms \( {t}_{1},\ldots ,{t}_{i - 1},{t}_{i + 1},\ldots ,{t}_{n} \).
Proposition 21.16. \( \mathfrak{M}/ \approx \) is well defined, i.e., if \( {t}_{1},\ldots ,{t}_{n},{t}_{1}^{\prime },\ldots ,{t}_{n}^{\prime } \) are terms, and \( {t}_{i} \approx {t}_{i}^{\prime } \) then\n\n\[ \text{1.}{\left\lbrack f\left( {t}_{1},\ldots ,{t}_{n}\right) \right\rbrack }_{ \approx } = {\left\lbrack f\left( {t}_{1}^{\prime },\ldots ,{t}_{n}^{\prime }\right) \right\rbrack }_{ \approx }\text{, i.e.,} \]\n\n\[ f\left( {{t}_{1},\ldots ,{t}_{n}}\right) \approx f\left( {{t}_{1}^{\prime },\ldots ,{t}_{n}^{\prime }}\right) \]\nand\n\n2. \( \mathfrak{M} \vDash R\left( {{t}_{1},\ldots ,{t}_{n}}\right) \) iff \( \mathfrak{M} \vDash R\left( {{t}_{1}^{\prime },\ldots ,{t}_{n}^{\prime }}\right) \), i.e.,\n\n\[ R\left( {{t}_{1},\ldots ,{t}_{n}}\right) \in {\Gamma }^{ * }\text{iff}R\left( {{t}_{1}^{\prime },\ldots ,{t}_{n}^{\prime }}\right) \in {\Gamma }^{ * }\text{.} \]
Proof. Follows from Proposition 21.13 by induction on \( n \) .
Lemma 21.17. \( \mathfrak{M}/ \approx \vDash \varphi \) iff \( \varphi \in {\Gamma }^{ * } \) for all sentences \( \varphi \) .
Proof. By induction on \( \varphi \), just as in the proof of Lemma 21.11. The only case that needs additional attention is when \( \varphi \equiv t = {t}^{\prime } \).

\[ \mathfrak{M}/ \approx \; \vDash t = {t}^{\prime }\text{ iff }{\left\lbrack t\right\rbrack }_{ \approx } = {\left\lbrack {t}^{\prime }\right\rbrack }_{ \approx }\text{ (by definition of }\mathfrak{M}/ \approx \text{)} \]

\[ \text{iff }t \approx {t}^{\prime }\text{ (by definition of }{\left\lbrack t\right\rbrack }_{ \approx }\text{)} \]

\[ \text{iff }t = {t}^{\prime } \in {\Gamma }^{ * }\text{ (by definition of } \approx \text{).} \]
Theorem 21.18 (Completeness Theorem). Let \( \Gamma \) be a set of sentences. If \( \Gamma \) is consistent, it is satisfiable.
Proof. Suppose \( \Gamma \) is consistent. By Lemma 21.6, there is a saturated consistent set \( {\Gamma }^{\prime } \supseteq \Gamma \) . By Lemma 21.8, there is a \( {\Gamma }^{ * } \supseteq {\Gamma }^{\prime } \) which is consistent and complete. Since \( {\Gamma }^{\prime } \subseteq {\Gamma }^{ * } \), for each formula \( \varphi \left( x\right) ,{\Gamma }^{ * } \) contains a sentence of the form \( \exists {x\varphi }\left( x\right) \rightarrow \varphi \left( c\right) \) and so \( {\Gamma }^{ * } \) is saturated. If \( \Gamma \) does not contain \( = \), then by Lemma 21.11, \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \varphi \in {\Gamma }^{ * } \) . From this it follows in particular that for all \( \varphi \in \Gamma ,\mathfrak{M}\left( {\Gamma }^{ * }\right) \vDash \varphi \), so \( \Gamma \) is satisfiable. If \( \Gamma \) does contain \( = \), then by Lemma 21.17, for all sentences \( \varphi ,\mathfrak{M}/ \approx \vDash \varphi \) iff \( \varphi \in {\Gamma }^{ * } \) . In particular, \( \mathfrak{M}/ \approx \vDash \varphi \) for all \( \varphi \in \Gamma \), so \( \Gamma \) is satisfiable.
Corollary 21.19 (Completeness Theorem, Second Version). For all \( \Gamma \) and sentences \( \varphi \) : if \( \Gamma \vDash \varphi \) then \( \Gamma \vdash \varphi \) .
Proof. Note that the \( \Gamma \)'s in Corollary 21.19 and Theorem 21.18 are universally quantified. To make sure we do not confuse ourselves, let us restate Theorem 21.18 using a different variable: for any set of sentences \( \Delta \), if \( \Delta \) is consistent, it is satisfiable. By contraposition, if \( \Delta \) is not satisfiable, then \( \Delta \) is inconsistent. We will use this to prove the corollary.

Suppose that \( \Gamma \vDash \varphi \). Then \( \Gamma \cup \{ \neg \varphi \} \) is unsatisfiable by Proposition 14.51. Taking \( \Gamma \cup \{ \neg \varphi \} \) as our \( \Delta \), the previous version of Theorem 21.18 gives us that \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent. By Propositions 17.19 to 19.19 and 20.27, \( \Gamma \vdash \varphi \).
Theorem 21.21 (Compactness Theorem). The following hold for any set of sentences \( \Gamma \) and sentence \( \varphi \):

1. \( \Gamma \vDash \varphi \) iff there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vDash \varphi \).

2. \( \Gamma \) is satisfiable if and only if it is finitely satisfiable.
Proof. We prove (2). If \( \Gamma \) is satisfiable, then there is a structure \( \mathfrak{M} \) such that \( \mathfrak{M} \vDash \varphi \) for all \( \varphi \in \Gamma \) . Of course, this \( \mathfrak{M} \) also satisfies every finite subset of \( \Gamma \) , so \( \Gamma \) is finitely satisfiable.\n\nNow suppose that \( \Gamma \) is finitely satisfiable. Then every finite subset \( {\Gamma }_{0} \subseteq \Gamma \) is satisfiable. By soundness (Corollaries 18.29, 17.31, 19.31 and 20.38), every finite subset is consistent. Then \( \Gamma \) itself must be consistent by Propositions 17.17 to 19.17 and 20.21. By completeness (Theorem 21.18), since \( \Gamma \) is consistent, it is satisfiable.
In every model \( \mathfrak{M} \) of a theory \( \Gamma \), each term \( t \) of course picks out an element of \( \left| \mathfrak{M}\right| \) . Can we guarantee that it is also true that every element of \( \left| \mathfrak{M}\right| \) is picked out by some term or other? In other words, are there theories \( \Gamma \) all models of which are covered?
The compactness theorem shows that this is not the case if \( \Gamma \) has infinite models. Here's how to see this: Let \( \mathfrak{M} \) be an infinite model of \( \Gamma \), and let \( c \) be a constant symbol not in the language of \( \Gamma \). Let \( \Delta \) be the set of all sentences \( c \neq t \) for \( t \) a term in the language \( \mathcal{L} \) of \( \Gamma \), i.e.,

\[ \Delta = \{ c \neq t : t \in \operatorname{Trm}\left( \mathcal{L}\right) \} \]

A finite subset of \( \Gamma \cup \Delta \) can be written as \( {\Gamma }^{\prime } \cup {\Delta }^{\prime } \), with \( {\Gamma }^{\prime } \subseteq \Gamma \) and \( {\Delta }^{\prime } \subseteq \Delta \). Since \( {\Delta }^{\prime } \) is finite, it can contain only finitely many terms. Let \( a \in \left| \mathfrak{M}\right| \) be an element of \( \left| \mathfrak{M}\right| \) not picked out by any of them, and let \( {\mathfrak{M}}^{\prime } \) be the structure that is just like \( \mathfrak{M} \), but also \( {c}^{{\mathfrak{M}}^{\prime }} = a \). Since \( a \neq {\operatorname{Val}}^{\mathfrak{M}}\left( t\right) \) for all \( t \) occurring in \( {\Delta }^{\prime } \), \( {\mathfrak{M}}^{\prime } \vDash {\Delta }^{\prime } \). Since \( \mathfrak{M} \vDash \Gamma \), \( {\Gamma }^{\prime } \subseteq \Gamma \), and \( c \) does not occur in \( \Gamma \), also \( {\mathfrak{M}}^{\prime } \vDash {\Gamma }^{\prime } \). Together, \( {\mathfrak{M}}^{\prime } \vDash {\Gamma }^{\prime } \cup {\Delta }^{\prime } \) for every finite subset \( {\Gamma }^{\prime } \cup {\Delta }^{\prime } \) of \( \Gamma \cup \Delta \). So every finite subset of \( \Gamma \cup \Delta \) is satisfiable. By compactness, \( \Gamma \cup \Delta \) itself is satisfiable. So there are models \( \mathfrak{M} \vDash \Gamma \cup \Delta \).
Every such \( \mathfrak{M} \) is a model of \( \Gamma \), but is not covered, since \( {\operatorname{Val}}^{\mathfrak{M}}\left( c\right) \neq {\operatorname{Val}}^{\mathfrak{M}}\left( t\right) \) for all terms \( t \) of \( \mathcal{L} \) .
Consider a language \( \mathcal{L} \) containing the predicate symbol \( < \) , constant symbols \( 0,1 \), and function symbols \( + , \times , - , \div \) . Let \( \Gamma \) be the set of all sentences in this language true in \( \mathfrak{Q} \) with domain \( \mathbb{Q} \) and the obvious interpretations. \( \Gamma \) is the set of all sentences of \( \mathcal{L} \) true about the rational numbers. Of course, in \( \mathbb{Q} \) (and even in \( \mathbb{R} \) ), there are no numbers which are greater than 0 but less than \( 1/k \) for all \( k \in {\mathbb{Z}}^{ + } \) . Such a number, if it existed, would be an infinitesimal: non-zero, but infinitely small. The compactness theorem shows that there are models of \( \Gamma \) in which infinitesimals exist.
Let \( \Delta \) be \( \{ 0 < c\} \cup \left\{ {c < \left( {1 \div \bar{k}}\right) : k \in {\mathbb{Z}}^{ + }}\right\} \) (where \( \bar{k} = \left( {1 + \left( {1 + \cdots + \left( {1 + 1}\right) \ldots }\right) }\right) \) with \( k \) 1's). For any finite subset \( {\Delta }_{0} \) of \( \Delta \) there is a \( K \) such that all the sentences \( c < \left( {1 \div \bar{k}}\right) \) in \( {\Delta }_{0} \) have \( k < K \). If we expand \( \mathfrak{Q} \) to \( {\mathfrak{Q}}^{\prime } \) with \( {c}^{{\mathfrak{Q}}^{\prime }} = 1/K \) we have that \( {\mathfrak{Q}}^{\prime } \vDash \Gamma \cup {\Delta }_{0} \), and so \( \Gamma \cup \Delta \) is finitely satisfiable (Exercise: prove this in detail). By compactness, \( \Gamma \cup \Delta \) is satisfiable. Any model \( \mathfrak{S} \) of \( \Gamma \cup \Delta \) contains an infinitesimal, namely \( {c}^{\mathfrak{S}} \).
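The choice of \( K \) in this argument can be made concrete. Here is a small sketch in Python (the function name and the use of exact rational arithmetic are our own) of the witness that satisfies a given finite subset \( {\Delta }_{0} \):

```python
from fractions import Fraction

def infinitesimal_witness(ks):
    """Given the finitely many k's such that c < (1 / k-bar) occurs in a
    finite subset Delta_0, return a rational value for c that makes 0 < c
    and every c < 1/k true -- the interpretation of c in the expansion Q'."""
    K = max(ks) + 1          # then k < K for every k occurring in Delta_0
    return Fraction(1, K)

c = infinitesimal_witness([1, 2, 3, 100])
assert c > 0 and all(c < Fraction(1, k) for k in [1, 2, 3, 100])
```

Each finite subset only forces \( c \) below finitely many \( 1/k \), so a standard rational suffices; it is only the full set \( \Delta \), handled by compactness, that forces \( {c}^{\mathfrak{S}} \) to be a genuine infinitesimal.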
Theorem 21.29 (Compactness). \( \Gamma \) is satisfiable if and only if it is finitely satisfiable.
Proof. If \( \Gamma \) is satisfiable, then there is a structure \( \mathfrak{M} \) such that \( \mathfrak{M} \vDash \varphi \) for all \( \varphi \in \Gamma \). Of course, this \( \mathfrak{M} \) also satisfies every finite subset of \( \Gamma \), so \( \Gamma \) is finitely satisfiable.

Now suppose that \( \Gamma \) is finitely satisfiable. By Lemma 21.26, there is a finitely satisfiable, saturated set \( {\Gamma }^{\prime } \supseteq \Gamma \). By Lemma 21.28, \( {\Gamma }^{\prime } \) can be extended to a complete and finitely satisfiable set \( {\Gamma }^{ * } \), and \( {\Gamma }^{ * } \) is still saturated. Construct the term model \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \) as in Definition 21.9. Note that Proposition 21.10 did not rely on the fact that \( {\Gamma }^{ * } \) is consistent (or complete or saturated, for that matter), but just on the fact that \( \mathfrak{M}\left( {\Gamma }^{ * }\right) \) is covered. The proof of the Truth Lemma (Lemma 21.11) goes through if we replace references to Proposition 21.2 and Proposition 21.7 by references to Proposition 21.25 and Proposition 21.27.
Theorem 21.30. If \( \Gamma \) is consistent then it has an enumerable model, i.e., it is satisfiable in a structure whose domain is either finite or denumerable.
Proof. If \( \Gamma \) is consistent, the structure \( \mathfrak{M} \) delivered by the proof of the completeness theorem has a domain \( \left| \mathfrak{M}\right| \) that is no larger than the set of the terms of the language \( \mathcal{L} \) . So \( \mathfrak{M} \) is at most denumerable.
Theorem 21.31. If \( \Gamma \) is a consistent set of sentences in the language of first-order logic without identity, then it has a denumerable model, i.e., it is satisfiable in a structure whose domain is infinite and enumerable.
Proof. If \( \Gamma \) is consistent and contains no sentences in which identity appears, then the structure \( \mathfrak{M} \) delivered by the proof of the completeness theorem has a domain \( \left| \mathfrak{M}\right| \) identical to the set of terms of the language \( {\mathcal{L}}^{\prime } \). So \( \mathfrak{M} \) is denumerable, since \( \operatorname{Trm}\left( {\mathcal{L}}^{\prime }\right) \) is.
Theorem 22.1. There are irrational numbers \( a \) and \( b \) such that \( {a}^{b} \) is rational.
Proof. Consider \( {\sqrt{2}}^{\sqrt{2}} \) . If this is rational, we are done: we can let \( a = b = \sqrt{2} \) . Otherwise, it is irrational. Then we have\n\n\[ \n{\left( {\sqrt{2}}^{\sqrt{2}}\right) }^{\sqrt{2}} = {\sqrt{2}}^{\sqrt{2} \cdot \sqrt{2}} = {\sqrt{2}}^{2} = 2, \n\]\n\nwhich is certainly rational. So, in this case, let \( a \) be \( {\sqrt{2}}^{\sqrt{2}} \), and let \( b \) be \( \sqrt{2} \) .
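A quick numerical check in Python (purely illustrative: a floating-point computation proves nothing about rationality) confirms the exponent arithmetic in the proof. In fact, by the Gelfond–Schneider theorem \( {\sqrt{2}}^{\sqrt{2}} \) is irrational, so it is the second case of the proof that actually obtains.

```python
import math

sqrt2 = math.sqrt(2)
# (sqrt(2)^sqrt(2))^sqrt(2) = sqrt(2)^(sqrt(2)*sqrt(2)) = sqrt(2)^2 = 2
value = (sqrt2 ** sqrt2) ** sqrt2
assert abs(value - 2) < 1e-9
```

The charm of the proof is precisely that it decides between the two cases without needing to know which one holds.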
Proposition 23.2. If an \( \mathcal{L} \) -structure \( \mathfrak{M} \) is a reduct of an \( {\mathcal{L}}^{\prime } \) -structure \( {\mathfrak{M}}^{\prime } \), then for all \( \mathcal{L} \) -sentences \( \varphi \) , \[ \mathfrak{M} \vDash \varphi \text{iff}{\mathfrak{M}}^{\prime } \vDash \varphi \text{.} \]
Proof. Exercise.
Theorem 23.5. If a set \( \Gamma \) of sentences has arbitrarily large finite models, then it has an infinite model.
Proof. Expand the language of \( \Gamma \) by adding countably many new constants \( {c}_{0} \) , \( {c}_{1},\ldots \) and consider the set \( \Gamma \cup \left\{ {{c}_{i} \neq {c}_{j} : i \neq j}\right\} \) . To say that \( \Gamma \) has arbitrarily large finite models means that for every \( m > 0 \) there is \( n \geq m \) such that \( \Gamma \) has a model of cardinality \( n \) . This implies that \( \Gamma \cup \left\{ {{c}_{i} \neq {c}_{j} : i \neq j}\right\} \) is finitely satisfiable. By compactness, \( \Gamma \cup \left\{ {{c}_{i} \neq {c}_{j} : i \neq j}\right\} \) has a model \( \mathfrak{M} \) whose domain must be infinite, since it satisfies all inequalities \( {c}_{i} \neq {c}_{j} \) .
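The finite-satisfiability step can be pictured concretely: any finite subset of \( \Gamma \cup \left\{ {{c}_{i} \neq {c}_{j} : i \neq j}\right\} \) mentions only finitely many of the new constants, and a large enough finite model can interpret them by pairwise distinct elements. A hypothetical sketch (names ours):

```python
def interpret_new_constants(domain, constants):
    """Assign pairwise distinct elements of a (large enough) finite domain
    to the new constants c_i, so that all inequalities c_i != c_j hold."""
    domain = list(domain)
    if len(domain) < len(constants):
        raise ValueError("no model of this size satisfies the inequalities")
    return dict(zip(constants, domain))

interp = interpret_new_constants(range(5), ["c0", "c1", "c2"])
assert len(set(interp.values())) == len(interp)   # all c_i distinct
```

Since \( \Gamma \) has models of arbitrarily large finite cardinality, every finite subset finds such a model; only the whole set forces infinitude.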
Proposition 23.6. There is no sentence \( \varphi \) of any first-order language that is true in a structure \( \mathfrak{M} \) if and only if the domain \( \left| \mathfrak{M}\right| \) of the structure is infinite.
Proof. If there were such a \( \varphi \), its negation \( \neg \varphi \) would be true in all and only the finite structures, and it would therefore have arbitrarily large finite models but it would lack an infinite model, contradicting Theorem 23.5.
Proposition 23.12. For any \( \mathfrak{M},\operatorname{Th}\left( \mathfrak{M}\right) \) is complete.
Proof. For any sentence \( \varphi \) either \( \mathfrak{M} \vDash \varphi \) or \( \mathfrak{M} \vDash \neg \varphi \), so either \( \varphi \in \operatorname{Th}\left( \mathfrak{M}\right) \) or \( \neg \varphi \in \operatorname{Th}\left( \mathfrak{M}\right) \) .
Proposition 23.13. If \( \mathfrak{N} \vDash \varphi \) for every \( \varphi \in \operatorname{Th}\left( \mathfrak{M}\right) \), then \( \mathfrak{M} \equiv \mathfrak{N} \) .
Proof. Since \( \mathfrak{N} \vDash \varphi \) for all \( \varphi \in \operatorname{Th}\left( \mathfrak{M}\right) \), \( \operatorname{Th}\left( \mathfrak{M}\right) \subseteq \operatorname{Th}\left( \mathfrak{N}\right) \). If \( \mathfrak{N} \vDash \varphi \), then \( \mathfrak{N} \nvDash \neg \varphi \), so \( \neg \varphi \notin \operatorname{Th}\left( \mathfrak{M}\right) \) (for otherwise \( \mathfrak{N} \vDash \neg \varphi \), since \( \mathfrak{N} \) satisfies every member of \( \operatorname{Th}\left( \mathfrak{M}\right) \)). Since \( \operatorname{Th}\left( \mathfrak{M}\right) \) is complete, \( \varphi \in \operatorname{Th}\left( \mathfrak{M}\right) \). So, \( \operatorname{Th}\left( \mathfrak{N}\right) \subseteq \operatorname{Th}\left( \mathfrak{M}\right) \), and we have \( \mathfrak{M} \equiv \mathfrak{N} \).
Theorem 23.16. If \( \mathfrak{M}{ \simeq }_{p}\mathfrak{N} \) and \( \mathfrak{M} \) and \( \mathfrak{N} \) are enumerable, then \( \mathfrak{M} \simeq \mathfrak{N} \) .
Proof. Since \( \mathfrak{M} \) and \( \mathfrak{N} \) are enumerable, let \( \left| \mathfrak{M}\right| = \left\{ {{a}_{0},{a}_{1},\ldots }\right\} \) and \( \left| \mathfrak{N}\right| = \left\{ {{b}_{0},{b}_{1},\ldots }\right\} \). Starting with an arbitrary \( {p}_{0} \in I \), we define an increasing sequence of partial isomorphisms \( {p}_{0} \subseteq {p}_{1} \subseteq {p}_{2} \subseteq \cdots \) as follows:

1. if \( n + 1 \) is odd, say \( n = {2r} \), then using the Forth property find a \( {p}_{n + 1} \in I \) such that \( {p}_{n} \subseteq {p}_{n + 1} \) and \( {a}_{r} \) is in the domain of \( {p}_{n + 1} \);

2. if \( n + 1 \) is even, say \( n + 1 = {2r} \), then using the Back property find a \( {p}_{n + 1} \in I \) such that \( {p}_{n} \subseteq {p}_{n + 1} \) and \( {b}_{r} \) is in the range of \( {p}_{n + 1} \).

If we now put

\[ p = \mathop{\bigcup }\limits_{{n \geq 0}}{p}_{n} \]

we have that \( p \) is an isomorphism between \( \mathfrak{M} \) and \( \mathfrak{N} \).
Theorem 23.17. Suppose \( \mathfrak{M} \) and \( \mathfrak{N} \) are structures for a purely relational language (a language containing only predicate symbols, and no function symbols or constants). Then if \( \mathfrak{M}{ \simeq }_{p}\mathfrak{N} \), also \( \mathfrak{M} \equiv \mathfrak{N} \) .
Proof. By induction on formulas, one shows that if \( {a}_{1},\ldots ,{a}_{n} \) and \( {b}_{1},\ldots ,{b}_{n} \) are such that there is a partial isomorphism \( p \) mapping each \( {a}_{i} \) to \( {b}_{i} \) and \( {s}_{1}\left( {x}_{i}\right) = {a}_{i} \) and \( {s}_{2}\left( {x}_{i}\right) = {b}_{i} \) (for \( i = 1,\ldots, n \) ), then \( \mathfrak{M},{s}_{1} \vDash \varphi \) if and only if \( \mathfrak{N},{s}_{2} \vDash \varphi \) . The case for \( n = 0 \) gives \( \mathfrak{M} \equiv \mathfrak{N} \) .
Proposition 23.19. Let \( \mathcal{L} \) be a finite purely relational language, i.e., a language containing finitely many predicate symbols and constant symbols, and no function symbols. Then for each \( n \in \mathbb{N} \) there are only finitely many first-order sentences in the language \( \mathcal{L} \) that have quantifier rank no greater than \( n \), up to logical equivalence.
Proof. By induction on \( n \) .
Theorem 23.23. Let \( \mathcal{L} \) be a purely relational language. Then \( {I}_{n}\left( {\mathbf{a},\mathbf{b}}\right) \) implies that for every \( \varphi \) such that \( \operatorname{qr}\left( \varphi \right) \leq n \), we have \( \mathfrak{M},\mathbf{a} \vDash \varphi \) if and only if \( \mathfrak{N},\mathbf{b} \vDash \varphi \) (where again \( \mathbf{a} \) satisfies \( \varphi \) if any \( s \) such that \( s\left( {x}_{i}\right) = {a}_{i} \) satisfies \( \varphi \)). Moreover, if \( \mathcal{L} \) is finite, the converse also holds.
Proof. The proof that \( {I}_{n}\left( {\mathbf{a},\mathbf{b}}\right) \) implies that \( \mathbf{a} \) and \( \mathbf{b} \) satisfy the same formulas of quantifier rank no greater than \( n \) is by an easy induction on \( \varphi \). For the converse we proceed by induction on \( n \), using Proposition 23.19, which ensures that for each \( n \) there are at most finitely many non-equivalent formulas of that quantifier rank.

For \( n = 0 \) the hypothesis that \( \mathbf{a} \) and \( \mathbf{b} \) satisfy the same quantifier-free formulas gives that they satisfy the same atomic ones, so that \( {I}_{0}\left( {\mathbf{a},\mathbf{b}}\right) \).

For the \( n + 1 \) case, suppose that \( \mathbf{a} \) and \( \mathbf{b} \) satisfy the same formulas of quantifier rank no greater than \( n + 1 \); in order to show \( {I}_{n + 1}\left( {\mathbf{a},\mathbf{b}}\right) \) it suffices to show that for each \( a \in \left| \mathfrak{M}\right| \) there is a \( b \in \left| \mathfrak{N}\right| \) such that \( {I}_{n}\left( {\mathbf{a}a,\mathbf{b}b}\right) \), and by the inductive hypothesis again it suffices to show that for each \( a \in \left| \mathfrak{M}\right| \) there is a \( b \in \left| \mathfrak{N}\right| \) such that \( \mathbf{a}a \) and \( \mathbf{b}b \) satisfy the same formulas of quantifier rank no greater than \( n \).

Given \( a \in \left| \mathfrak{M}\right| \), let \( {\tau }_{n}^{a} \) be the set of formulas \( \psi \left( {x,\mathbf{y}}\right) \) of rank no greater than \( n \) satisfied by \( \mathbf{a}a \) in \( \mathfrak{M} \); up to logical equivalence \( {\tau }_{n}^{a} \) is finite, so we can assume it is a single first-order formula (the conjunction of its members). It follows that \( \mathbf{a} \) satisfies \( \exists x\,{\tau }_{n}^{a}\left( {x,\mathbf{y}}\right) \), which has quantifier rank no greater than \( n + 1 \). By hypothesis \( \mathbf{b} \) satisfies the same formula in \( \mathfrak{N} \), so that there is a \( b \in \left| \mathfrak{N}\right| \) such that \( \mathbf{b}b \) satisfies \( {\tau }_{n}^{a} \); in particular, \( \mathbf{b}b \) satisfies the same formulas of quantifier rank no greater than \( n \) as \( \mathbf{a}a \). Similarly one shows that for every \( b \in \left| \mathfrak{N}\right| \) there is an \( a \in \left| \mathfrak{M}\right| \) such that \( \mathbf{a}a \) and \( \mathbf{b}b \) satisfy the same formulas of quantifier rank no greater than \( n \), which completes the proof.
Theorem 23.26. Any two enumerable dense linear orderings without endpoints are isomorphic.
Proof. Let \( {\mathfrak{M}}_{1} \) and \( {\mathfrak{M}}_{2} \) be enumerable dense linear orderings without endpoints, with \( { < }_{1} = { < }^{{\mathfrak{M}}_{1}} \) and \( { < }_{2} = { < }^{{\mathfrak{M}}_{2}} \), and let \( \mathcal{I} \) be the set of all partial isomorphisms between them. \( \mathcal{I} \) is not empty since at least \( \varnothing \in \mathcal{I} \). We show that \( \mathcal{I} \) satisfies the Back-and-Forth property. Then \( {\mathfrak{M}}_{1}{ \simeq }_{p}{\mathfrak{M}}_{2} \), and the theorem follows by Theorem 23.16.

To show \( \mathcal{I} \) satisfies the Forth property, let \( p \in \mathcal{I} \) and let \( p\left( {a}_{i}\right) = {b}_{i} \) for \( i = 1,\ldots, n \), and without loss of generality suppose \( {a}_{1}{ < }_{1}{a}_{2}{ < }_{1}\cdots { < }_{1}{a}_{n} \). Given \( a \in \left| {\mathfrak{M}}_{1}\right| \), find \( b \in \left| {\mathfrak{M}}_{2}\right| \) as follows:

1. if \( a{ < }_{1}{a}_{1} \), let \( b \in \left| {\mathfrak{M}}_{2}\right| \) be such that \( b{ < }_{2}{b}_{1} \);

2. if \( {a}_{n}{ < }_{1}a \), let \( b \in \left| {\mathfrak{M}}_{2}\right| \) be such that \( {b}_{n}{ < }_{2}b \);

3. if \( {a}_{i}{ < }_{1}a{ < }_{1}{a}_{i + 1} \) for some \( i \), then let \( b \in \left| {\mathfrak{M}}_{2}\right| \) be such that \( {b}_{i}{ < }_{2}b{ < }_{2}{b}_{i + 1} \).

It is always possible to find a \( b \) with the desired property since \( {\mathfrak{M}}_{2} \) is a dense linear ordering without endpoints. Define \( q = p \cup \{ \langle a, b\rangle \} \) so that \( q \in \mathcal{I} \) is the desired extension of \( p \). This establishes the Forth property. The Back property is similar. So \( {\mathfrak{M}}_{1}{ \simeq }_{p}{\mathfrak{M}}_{2} \); by Theorem 23.16, \( {\mathfrak{M}}_{1} \simeq {\mathfrak{M}}_{2} \).
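The Forth step of this proof is effectively an algorithm. Below is a sketch in Python over \( \mathbb{Q} \); the representation choices are ours (a partial isomorphism is a dict, and midpoints or shifted endpoints witness density and the absence of endpoints):

```python
from fractions import Fraction

def forth(p, a):
    """Extend the partial isomorphism p so that a is in its domain,
    following the three cases in the proof."""
    if not p:
        return {a: Fraction(0)}            # empty map: any image works
    if a in p:
        return p
    below = [x for x in p if x < a]
    above = [x for x in p if x > a]
    if not below:                          # case 1: a below every a_i
        b = min(p.values()) - 1
    elif not above:                        # case 2: a above every a_i
        b = max(p.values()) + 1
    else:                                  # case 3: a_i < a < a_{i+1}
        b = (p[max(below)] + p[min(above)]) / 2   # density supplies b
    return {**p, a: b}

p = forth({Fraction(0): Fraction(10), Fraction(2): Fraction(20)}, Fraction(1))
assert p[Fraction(1)] == Fraction(15)
# the extension is still order-preserving:
xs = sorted(p)
assert all(p[x] < p[y] for x, y in zip(xs, xs[1:]))
```

Iterating this step (alternating with the symmetric Back step) is exactly the construction of Theorem 23.16.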
Proposition 24.2. If a structure \( \mathfrak{M} \) is standard, its domain is the set of values of the standard numerals, i.e., \[ \left| \mathfrak{M}\right| = \left\{ {{\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) : n \in \mathbb{N}}\right\} \]
Proof. Clearly, every \( {\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) \in \left| \mathfrak{M}\right| \) . We just have to show that every \( x \in \) \( \left| \mathfrak{M}\right| \) is equal to \( {\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) \) for some \( n \) . Since \( \mathfrak{M} \) is standard, it is isomorphic to \( \mathfrak{N} \) . Suppose \( g : \mathbb{N} \rightarrow \left| \mathfrak{M}\right| \) is an isomorphism. Then \( g\left( n\right) = g\left( {{\operatorname{Val}}^{\mathfrak{N}}\left( \bar{n}\right) }\right) = \) \( {\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) \) . But for every \( x \in \left| \mathfrak{M}\right| \), there is an \( n \in \mathbb{N} \) such that \( g\left( n\right) = x \), since \( g \) is surjective.
Proposition 24.4. If \( \mathfrak{M} \) is standard, then \( g \) from the proof of Proposition 24.3 is the only isomorphism from \( \mathfrak{N} \) to \( \mathfrak{M} \) .
Proof. Suppose \( h : \mathbb{N} \rightarrow \left| \mathfrak{M}\right| \) is an isomorphism between \( \mathfrak{N} \) and \( \mathfrak{M} \). We show that \( g = h \) by induction on \( n \). If \( n = 0 \), then \( g\left( 0\right) = {\mathrm{o}}^{\mathfrak{M}} \) by definition of \( g \). But since \( h \) is an isomorphism, \( h\left( 0\right) = h\left( {\mathrm{o}}^{\mathfrak{N}}\right) = {\mathrm{o}}^{\mathfrak{M}} \), so \( g\left( 0\right) = h\left( 0\right) \). Now consider the case for \( n + 1 \). We have

\[ g\left( {n + 1}\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( \overline{n + 1}\right) \text{ by definition of }g \]

\[ = {\operatorname{Val}}^{\mathfrak{M}}\left( {\bar{n}}^{\prime }\right) \text{ since }\overline{n + 1} \equiv {\bar{n}}^{\prime } \]

\[ = {\prime }^{\mathfrak{M}}\left( {{\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) }\right) \text{ by definition of }{\operatorname{Val}}^{\mathfrak{M}}\left( {t}^{\prime }\right) \]

\[ = {\prime }^{\mathfrak{M}}\left( {g\left( n\right) }\right) \text{ by definition of }g \]

\[ = {\prime }^{\mathfrak{M}}\left( {h\left( n\right) }\right) \text{ by induction hypothesis} \]

\[ = h\left( {{\prime }^{\mathfrak{N}}\left( n\right) }\right) \text{ since }h\text{ is an isomorphism} \]

\[ = h\left( {n + 1}\right) \]
Proposition 24.6. If a structure \( \mathfrak{M} \) for \( {\mathcal{L}}_{A} \) contains a non-standard number, \( \mathfrak{M} \) is non-standard.
Proof. Suppose not, i.e., suppose \( \mathfrak{M} \) is standard but contains a non-standard number \( x \). Let \( g : \mathbb{N} \rightarrow \left| \mathfrak{M}\right| \) be an isomorphism. It is easy to see (by induction on \( n \)) that \( g\left( {{\operatorname{Val}}^{\mathfrak{N}}\left( \bar{n}\right) }\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) \). In other words, \( g \) maps standard numbers of \( \mathfrak{N} \) to standard numbers of \( \mathfrak{M} \). If \( \mathfrak{M} \) contains a non-standard number, \( g \) cannot be surjective, contrary to hypothesis.
Proposition 24.7. Let \( \mathrm{TA} = \{ \varphi : \mathfrak{N} \vDash \varphi \} \) be the theory of \( \mathfrak{N} \) . TA has an enumerable non-standard model.
Proof. Expand \( {\mathcal{L}}_{A} \) by a new constant symbol \( c \) and consider the set of sentences

\[ \Gamma = \mathbf{TA} \cup \{ c \neq \overline{0}, c \neq \overline{1}, c \neq \overline{2},\ldots \} \]

Any model \( {\mathfrak{M}}^{c} \) of \( \Gamma \) would contain an element \( x = {c}^{{\mathfrak{M}}^{c}} \) which is non-standard, since \( x \neq {\operatorname{Val}}^{{\mathfrak{M}}^{c}}\left( \bar{n}\right) \) for all \( n \in \mathbb{N} \). Also, obviously, \( {\mathfrak{M}}^{c} \vDash \mathbf{TA} \), since \( \mathbf{TA} \subseteq \Gamma \). If we turn \( {\mathfrak{M}}^{c} \) into a structure \( \mathfrak{M} \) for \( {\mathcal{L}}_{A} \) simply by forgetting about \( c \), its domain still contains the non-standard \( x \), and also \( \mathfrak{M} \vDash \mathbf{TA} \). The latter is guaranteed since \( c \) does not occur in TA. So, it suffices to show that \( \Gamma \) has a model.

We use the compactness theorem to show that \( \Gamma \) has a model. If every finite subset of \( \Gamma \) is satisfiable, so is \( \Gamma \). Consider any finite subset \( {\Gamma }_{0} \subseteq \Gamma \). \( {\Gamma }_{0} \) includes some sentences of TA and some of the form \( c \neq \bar{n} \), but only finitely many. Suppose \( k \) is the largest number so that \( c \neq \bar{k} \in {\Gamma }_{0} \). Define \( {\mathfrak{N}}_{k} \) by expanding \( \mathfrak{N} \) to include the interpretation \( {c}^{{\mathfrak{N}}_{k}} = k + 1 \). Then \( {\mathfrak{N}}_{k} \vDash {\Gamma }_{0} \): if \( \varphi \in \mathbf{TA} \), \( {\mathfrak{N}}_{k} \vDash \varphi \) since \( {\mathfrak{N}}_{k} \) is just like \( \mathfrak{N} \) in all respects except \( c \), and \( c \) does not occur in \( \varphi \). And \( {\mathfrak{N}}_{k} \vDash c \neq \bar{n} \), since \( n \leq k \), and \( {\operatorname{Val}}^{{\mathfrak{N}}_{k}}\left( c\right) = k + 1 \). Thus, every finite subset of \( \Gamma \) is satisfiable.
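The key point of the last paragraph, that the expansion \( {\mathfrak{N}}_{k} \) satisfies all the finitely many inequalities in \( {\Gamma }_{0} \), can be checked mechanically. A sketch (the function name is ours), assuming \( k \) is the largest \( n \) with \( c \neq \bar{n} \in {\Gamma }_{0} \):

```python
def expansion_satisfies_inequalities(k):
    """In the expansion N_k of the standard model, c is interpreted as
    k + 1, so every sentence c != n-bar with n <= k comes out true."""
    c = k + 1
    return all(c != n for n in range(k + 1))

assert expansion_satisfies_inequalities(10)
```

No single standard number works for all of \( \Delta \) at once, which is why the non-standard element only appears in the limit, via compactness.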
Example 24.8. Consider the structure \( \mathfrak{K} \) with domain \( \left| \mathfrak{K}\right| = \mathbb{N} \cup \{ a\} \) and interpretations

\[ {\mathrm{o}}^{\mathfrak{K}} = 0 \]

\[ {\prime }^{\mathfrak{K}}\left( x\right) = \left\{ \begin{array}{ll} x + 1 & \text{ if }x \in \mathbb{N} \\ a & \text{ if }x = a \end{array}\right. \]

\[ { + }^{\mathfrak{K}}\left( {x, y}\right) = \left\{ \begin{array}{ll} x + y & \text{ if }x, y \in \mathbb{N} \\ a & \text{ otherwise } \end{array}\right. \]

\[ { \times }^{\mathfrak{K}}\left( {x, y}\right) = \left\{ \begin{array}{ll} {xy} & \text{ if }x, y \in \mathbb{N} \\ 0 & \text{ if }x = 0\text{ or }y = 0 \\ a & \text{ otherwise } \end{array}\right. \]

\[ { < }^{\mathfrak{K}} = \{ \langle x, y\rangle : x, y \in \mathbb{N}\text{ and }x < y\} \cup \{ \langle x, a\rangle : x \in \left| \mathfrak{K}\right| \} \]

To show that \( \mathfrak{K} \vDash \mathbf{Q} \) we have to verify that all axioms of \( \mathbf{Q} \) are true in \( \mathfrak{K} \).
\( \mathfrak{K} \vDash \forall x\forall y\left( {{x}^{\prime } = {y}^{\prime } \rightarrow x = y}\right) \) since \( * \) is injective. \( \mathfrak{K} \vDash \forall x\,\mathrm{o} \neq {x}^{\prime } \) since 0 is not a \( * \)-successor in \( \mathfrak{K} \). \( \mathfrak{K} \vDash \forall x\left( {x = 0\vee \exists {y\,x} = {y}^{\prime }}\right) \) since for every \( n > 0 \), \( n = {\left( n - 1\right) }^{ * } \), and \( a = {a}^{ * } \).

\( \mathfrak{K} \vDash \forall x\left( {x + \mathrm{o}}\right) = x \) since \( n \oplus 0 = n + 0 = n \), and \( a \oplus 0 = a \) by definition of \( \oplus \). \( \mathfrak{K} \vDash \forall x\forall y\left( {x + {y}^{\prime }}\right) = {\left( x + y\right) }^{\prime } \) is a bit trickier. If \( n, m \) are both standard, we have:

\[ \left( {n \oplus {m}^{ * }}\right) = \left( {n + \left( {m + 1}\right) }\right) = \left( {n + m}\right) + 1 = {\left( n \oplus m\right) }^{ * } \]

since \( \oplus \) and \( * \) agree with \( + \) and \( \prime \) on standard numbers. Now suppose \( x \in \left| \mathfrak{K}\right| \). Then

\[ \left( {x \oplus {a}^{ * }}\right) = \left( {x \oplus a}\right) = a = {a}^{ * } = {\left( x \oplus a\right) }^{ * } \]

The remaining case is if \( y \in \left| \mathfrak{K}\right| \) but \( x = a \). Here we also have to distinguish cases according to whether \( y = n \) is standard or \( y = a \):

\[ \left( {a \oplus {n}^{ * }}\right) = \left( {a \oplus \left( {n + 1}\right) }\right) = a = {a}^{ * } = {\left( a \oplus n\right) }^{ * } \]

\[ \left( {a \oplus {a}^{ * }}\right) = \left( {a \oplus a}\right) = a = {a}^{ * } = {\left( a \oplus a\right) }^{ * } \]
Consider the structure \( \mathfrak{L} \) with domain \( \left| \mathfrak{L}\right| = \mathbb{N} \cup \{ a, b\} \) and interpretations \( {\prime }^{\mathfrak{L}} = * \), \( { + }^{\mathfrak{L}} = \oplus \) given by

<table><thead><tr><th rowspan="2">\( x \)</th><th rowspan="2">\( {x}^{ * } \)</th><th colspan="3">\( x \oplus y \)</th></tr><tr><th>\( y = m \)</th><th>\( y = a \)</th><th>\( y = b \)</th></tr></thead><tr><td>\( n \)</td><td>\( n + 1 \)</td><td>\( n + m \)</td><td>\( b \)</td><td>\( a \)</td></tr><tr><td>\( a \)</td><td>\( a \)</td><td>\( a \)</td><td>\( b \)</td><td>\( a \)</td></tr><tr><td>\( b \)</td><td>\( b \)</td><td>\( b \)</td><td>\( b \)</td><td>\( a \)</td></tr></table>
Since \( * \) is injective, \( 0 \) is not in its range, and every \( x \in \left| \mathfrak{L}\right| \) other than \( 0 \) is, axioms \( {Q}_{1} \)–\( {Q}_{3} \) are true in \( \mathfrak{L} \). For any \( x \), \( x \oplus 0 = x \), so \( {Q}_{4} \) is true as well. For \( {Q}_{5} \), consider \( x \oplus {y}^{ * } \) and \( {\left( x \oplus y\right) }^{ * } \). They are equal if \( x \) and \( y \) are both standard, since then \( * \) and \( \oplus \) agree with \( \prime \) and \( + \). If \( x \) is non-standard, and \( y \) is standard, we have \( x \oplus {y}^{ * } = x = {x}^{ * } = {\left( x \oplus y\right) }^{ * } \). If \( x \) and \( y \) are both non-standard, we have four cases:

\[ a \oplus {a}^{ * } = b = {b}^{ * } = {\left( a \oplus a\right) }^{ * } \]

\[ b \oplus {b}^{ * } = a = {a}^{ * } = {\left( b \oplus b\right) }^{ * } \]

\[ b \oplus {a}^{ * } = b = {b}^{ * } = {\left( b \oplus a\right) }^{ * } \]

\[ a \oplus {b}^{ * } = a = {a}^{ * } = {\left( a \oplus b\right) }^{ * } \]

If \( x \) is standard, but \( y \) is non-standard, we have

\[ n \oplus {a}^{ * } = n \oplus a = b = {b}^{ * } = {\left( n \oplus a\right) }^{ * } \]

\[ n \oplus {b}^{ * } = n \oplus b = a = {a}^{ * } = {\left( n \oplus b\right) }^{ * } \]

So, \( \mathfrak{L} \vDash {Q}_{5} \). However, \( a \oplus 0 \neq 0 \oplus a \), so \( \mathfrak{L} \nvDash \forall x\forall y\left( {x + y}\right) = \left( {y + x}\right) \).
Yes
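As with \( \mathfrak{K} \), the table for \( \mathfrak{L} \) can be checked mechanically. This sketch (names ours) implements the table and verifies both that \( {Q}_{5} \) holds on a sample and that commutativity of addition fails:

```python
# Sketch of the structure L with domain N ∪ {a, b}.

A, B = "a", "b"

def succ_L(x):
    """x* = x + 1 for standard x; a* = a, b* = b."""
    return x + 1 if isinstance(x, int) else x

def plus_L(x, y):
    """x ⊕ y, following the table: rows indexed by x, columns by y."""
    if isinstance(x, int):
        if isinstance(y, int):
            return x + y               # n ⊕ m = n + m
        return B if y == A else A      # n ⊕ a = b, n ⊕ b = a
    if x == A:
        if isinstance(y, int):
            return A                   # a ⊕ m = a
        return B if y == A else A      # a ⊕ a = b, a ⊕ b = a
    # x == B
    if isinstance(y, int):
        return B                       # b ⊕ m = b
    return B if y == A else A          # b ⊕ a = b, b ⊕ b = a

sample = list(range(8)) + [A, B]

# Q4 and Q5 hold on the sample ...
assert all(plus_L(x, 0) == x for x in sample)
assert all(plus_L(x, succ_L(y)) == succ_L(plus_L(x, y))
           for x in sample for y in sample)

# ... but addition is not commutative: a ⊕ 0 = a while 0 ⊕ a = b.
assert plus_L(A, 0) == A and plus_L(0, A) == B
```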
Proposition 24.10. In \( \mathfrak{M} \), \( \otimes \) is a linear strict order, i.e., it satisfies:

1. Not \( x \otimes x \) for any \( x \in \left| \mathfrak{M}\right| \) .

2. If \( x \otimes y \) and \( y \otimes z \) then \( x \otimes z \) .

3. For any \( x \neq y \), \( x \otimes y \) or \( y \otimes x \) .
Proof. PA proves:

1. \( \forall x\neg x < x \)

2. \( \forall x\forall y\forall z\left( {\left( {x < y \land y < z}\right) \rightarrow x < z}\right) \)

3. \( \forall x\forall y\left( {\left( {x < y \vee y < x}\right) \vee x = y}\right) \)

Since \( \mathfrak{M} \vDash \mathbf{PA} \), these sentences are true in \( \mathfrak{M} \), and they express exactly conditions (1)–(3) for \( \otimes \).
Yes
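Because these three sentences are theorems of PA, they hold in every model of PA, the standard model included. As a quick illustration (ours, not part of the proof), the corresponding properties of \( < \) can be checked on a finite initial segment of \( \mathbb{N} \):

```python
# The three PA-provable order properties, checked for < on {0, ..., 19}.
N = range(20)

# 1. irreflexivity: not x < x
assert all(not x < x for x in N)

# 2. transitivity: x < y and y < z imply x < z
assert all(x < z for x in N for y in N for z in N if x < y and y < z)

# 3. trichotomy: distinct elements are comparable
assert all(x < y or y < x for x in N for y in N if x != y)
```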
Proposition 24.12. All standard elements of \( \mathfrak{M} \) are less than (according to \( \otimes \) ) all non-standard elements.
Proof. We’ll use \( n \) as short for \( {\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) \), a standard element of \( \mathfrak{M} \). Already \( \mathbf{Q} \) proves that, for any \( n \in \mathbb{N},\forall x\left( {x < {\bar{n}}^{\prime } \rightarrow \left( {x = \overline{0} \vee x = \overline{1} \vee \cdots \vee x = \bar{n}}\right) }\right) \), and that there are no elements \( \otimes \)-less than \( 0 \). So if \( x \otimes n \) for some standard \( n > 0 \), then \( x \) must be one of the standard elements \( 0, \ldots, n - 1 \). Hence, if \( n \) is standard and \( x \) is non-standard, we cannot have \( x \otimes n \). By definition, a non-standard element is one that isn’t \( {\operatorname{Val}}^{\mathfrak{M}}\left( \bar{n}\right) \) for any \( n \in \mathbb{N} \), so \( x \neq n \) as well. Since \( \otimes \) is a linear order, we must have \( n \otimes x \).
Yes
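The picture behind this proposition can be made concrete in a toy model of the order alone (not of full PA): it is a classical fact that a countable nonstandard model of PA has order type \( \mathbb{N} + \mathbb{Z} \cdot \mathbb{Q} \). In the sketch below (coding and names ours), standard \( n \) is coded as \( (0, n) \) and nonstandard elements as pairs \( (q, z) \) with rational \( q > 0 \), compared lexicographically; every standard element then precedes every nonstandard one:

```python
from fractions import Fraction

# Toy copy of the order type N + Z·Q: standard n is (0, n); a nonstandard
# element in the block indexed by rational q > 0 is (q, z) with z in Z.
# Python's tuple comparison is lexicographic, our stand-in for ⊗.

standard = [(Fraction(0), n) for n in range(10)]
qs = [Fraction(1, 2), Fraction(1), Fraction(3, 2)]
nonstandard = [(q, z) for q in qs for z in range(-3, 4)]

# every standard element is ⊗-below every nonstandard one
assert all(s < x for s in standard for x in nonstandard)
```

The assertion holds because the first coordinate dominates: \( 0 < q \) for every nonstandard code.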
Proposition 24.13. Every nonstandard element \( x \) of \( \left| \mathfrak{M}\right| \) is an element of the subset

\[ \ldots \otimes {}^{***}x \otimes {}^{**}x \otimes {}^{*}x \otimes x \otimes {x}^{ * } \otimes {x}^{**} \otimes {x}^{***} \otimes \ldots \]

We call this subset the block of \( x \) and write it as \( \left\lbrack x\right\rbrack \) . It has no least and no greatest element. It can be characterized as the set of those \( y \in \left| \mathfrak{M}\right| \) such that, for some standard \( n, x \oplus n = y \) or \( y \oplus n = x \) .
Proof. Clearly, such a set \( \left\lbrack x\right\rbrack \) always exists, since every element \( y \) of \( \left| \mathfrak{M}\right| \) has a unique successor \( {y}^{ * } \) and unique predecessor \( {}^{ * }y \). For successive elements \( y \), \( {y}^{ * } \) we have \( y \otimes {y}^{ * } \), and \( {y}^{ * } \) is the \( \otimes \)-least element of \( \left| \mathfrak{M}\right| \) such that \( y \) is \( \otimes \)-less than it. Since always \( {}^{ * }y \otimes y \) and \( y \otimes {y}^{ * } \), \( \left\lbrack x\right\rbrack \) has no least or greatest element. If \( y \in \left\lbrack x\right\rbrack \) then \( x \in \left\lbrack y\right\rbrack \), for then either \( {y}^{*\ldots * } = x \) or \( {x}^{*\ldots * } = y \). If \( {y}^{*\ldots * } = x \) (with \( n \) \( * \)'s), then \( y \oplus n = x \) and conversely, since \( \mathbf{{PA}} \vdash \forall x\,{x}^{\prime \ldots \prime } = \left( {x + \bar{n}}\right) \) (if \( n \) is the number of \( \prime \)'s).
Yes
Proposition 24.14. If \( \left\lbrack x\right\rbrack \neq \left\lbrack y\right\rbrack \) and \( x \otimes y \), then for any \( u \in \left\lbrack x\right\rbrack \) and any \( v \in \left\lbrack y\right\rbrack \) , \( u \otimes v \) .
Proof. Note that \( \mathbf{{PA}} \vdash \forall x\forall y\left( {x < y \rightarrow \left( {{x}^{\prime } < y \vee {x}^{\prime } = y}\right) }\right) \). Thus, if \( u \otimes v \) and \( \left\lbrack u\right\rbrack \neq \left\lbrack v\right\rbrack \), we also have \( u \oplus {n}^{ * } \otimes v \) for any standard \( n \): each successor of \( u \) is still \( \otimes v \) or equal to \( v \), and equality is ruled out since it would put \( v \) in \( \left\lbrack u\right\rbrack \).

Any \( u \in \left\lbrack x\right\rbrack \) is \( \otimes y \): we have \( x \otimes y \) by assumption. If \( u \otimes x \), then \( u \otimes y \) by transitivity. And if \( x \otimes u \) but \( u \in \left\lbrack x\right\rbrack \), we have \( u = x \oplus {n}^{ * } \) for some \( n \), and so \( u \otimes y \) by the fact just proved.

Now suppose that \( v \in \left\lbrack y\right\rbrack \) is \( \otimes y \), i.e., \( v \oplus {m}^{ * } = y \) for some standard \( m \). This rules out \( v \otimes x \), otherwise \( y = v \oplus {m}^{ * } \otimes x \). Clearly also \( x \neq v \), otherwise \( x \oplus {m}^{ * } = v \oplus {m}^{ * } = y \) and we would have \( \left\lbrack x\right\rbrack = \left\lbrack y\right\rbrack \). So, \( x \otimes v \). But then also \( x \oplus {n}^{ * } \otimes v \) for any \( n \). Hence, if \( x \otimes u \) and \( u \in \left\lbrack x\right\rbrack \), we have \( u \otimes v \). If \( u \otimes x \) then \( u \otimes v \) by transitivity.

Lastly, if \( y \otimes v \), then \( u \otimes v \) since, as we’ve shown, \( u \otimes y \) and \( y \otimes v \).
Yes
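To visualize blocks and their ordering, here is a small sketch (coding and names ours, not the text's): nonstandard elements are coded as pairs \( (q, z) \) with \( q \) indexing the block and \( z \in \mathbb{Z} \) the position within it, and lexicographic comparison plays the role of \( \otimes \). Proposition 24.14 then amounts to the fact that the lexicographic order compares distinct blocks wholesale:

```python
from fractions import Fraction

# Toy stand-in for the nonstandard part of |M|: element (q, z) lies in the
# block indexed by q; successor adds 1 to z; order is lexicographic.

def block(x):
    return x[0]

def succ(x):
    q, z = x
    return (q, z + 1)

qs = [Fraction(1), Fraction(2)]
X = [(q, z) for q in qs for z in range(-5, 6)]

# blocks are closed under successor
assert all(block(succ(x)) == block(x) for x in X)

# Proposition 24.14 in miniature: if block(x) != block(y) and x < y, then
# every element of x's block is below every element of y's block.
assert all(u < v
           for x in X for y in X if block(x) != block(y) and x < y
           for u in X if block(u) == block(x)
           for v in X if block(v) == block(y))
```

The second assertion holds because distinct blocks have distinct \( q \)-components, and the \( q \)-component decides the lexicographic comparison regardless of \( z \).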