For instance, consider the assumption \( \mathbb{T}\varphi \land \neg \varphi \). Here is the (open) tableau consisting of just that assumption:

\[ \text{1.}\;\mathbb{T}\varphi \land \neg \varphi \;\text{Assumption} \]
|
We obtain a new tableau from it by applying the \( \land \mathbb{T} \) rule to the assumption. That rule allows us to add two new lines to the tableau, \( \mathbb{T}\varphi \) and \( \mathbb{T}\neg \varphi \):

\[ \begin{matrix} \text{1.} & \mathbb{T}\varphi \land \neg \varphi & \text{Assumption} \\ \text{2.} & \mathbb{T}\varphi & \land \mathbb{T}1 \\ \text{3.} & \mathbb{T}\neg \varphi & \land \mathbb{T}1 \end{matrix} \]
|
Yes
|
Let’s find a closed tableau for the sentence \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \) .
|
We begin by writing the corresponding assumption at the top of the tableau.

\[ \text{1.}\;\mathbb{F}\left( {\varphi \land \psi }\right) \rightarrow \varphi \;\text{Assumption} \]

There is only one assumption, so only one signed formula to which we can apply a rule. (For every signed formula, there is always at most one rule that can be applied: it's the rule for the corresponding sign and main operator of the sentence.) In this case, this means we must apply \( \rightarrow \mathbb{F} \), which adds two new lines:

\[ \begin{matrix} \text{1.} & \mathbb{F}\left( {\varphi \land \psi }\right) \rightarrow \varphi \checkmark & \text{Assumption} \\ \text{2.} & \mathbb{T}\varphi \land \psi & \rightarrow \mathbb{F}1 \\ \text{3.} & \mathbb{F}\varphi & \rightarrow \mathbb{F}1 \end{matrix} \]

To keep track of which signed formulas we have applied their corresponding rules to, we write a checkmark next to the sentence. However, only write a checkmark if the rule has been applied to all open branches. Once a signed formula has had the corresponding rule applied in every open branch, we will not have to return to it and apply the rule again. In this case, there is only one branch, so the rule only has to be applied once. (Note that checkmarks are only a convenience for constructing tableaux and are not officially part of the syntax of tableaux.)

There is one new signed formula to which we can apply a rule: the \( \mathbb{T}\varphi \land \psi \) on line 2.
Applying the \( \land \mathbb{T} \) rule results in:

<table><tr><td>1.</td><td>\( \mathbb{F}\left( {\varphi \land \psi }\right) \rightarrow \varphi \checkmark \)</td><td>Assumption</td></tr><tr><td>2.</td><td>\( \mathbb{T}\varphi \land \psi \checkmark \)</td><td>\( \rightarrow \mathbb{F}1 \)</td></tr><tr><td>3.</td><td>\( \mathbb{F}\varphi \)</td><td>\( \rightarrow \mathbb{F}1 \)</td></tr><tr><td>4.</td><td>\( \mathbb{T}\varphi \)</td><td>\( \land \mathbb{T}2 \)</td></tr><tr><td>5.</td><td>\( \mathbb{T}\psi \)</td><td>\( \land \mathbb{T}2 \)</td></tr><tr><td></td><td>\( \otimes \)</td><td></td></tr></table>

Since the branch now contains both \( \mathbb{T}\varphi \) (on line 4) and \( \mathbb{F}\varphi \) (on line 3), the branch is closed. Since it is the only branch, the tableau is closed. We have found a closed tableau for \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \).
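A closed tableau for \( \mathbb{F}\chi \) corresponds to \( \chi \) being true under every valuation. As a sanity check independent of the tableau calculus, the following sketch (our encoding, not part of the tableau system) verifies by truth table that \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \) is a tautology:

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b false.
    return (not a) or b

def is_tautology():
    # Check (phi & psi) -> phi under all four valuations of phi, psi.
    return all(implies(p and q, p) for p, q in product([True, False], repeat=2))

print(is_tautology())  # True
```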
|
Yes
|
Now let’s find a closed tableau for \( \left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \).
|
We begin with the corresponding assumption:

\[ \text{1.}\;\mathbb{F}\left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \;\text{Assumption} \]

The one signed formula in this tableau has main operator \( \rightarrow \) and sign \( \mathbb{F} \), so we apply the \( \rightarrow \mathbb{F} \) rule to it to obtain:

\[ \begin{matrix} \text{1.} & \mathbb{F}\left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \checkmark & \text{Assumption} \\ \text{2.} & \mathbb{T}\neg \varphi \vee \psi & \rightarrow \mathbb{F}1 \\ \text{3.} & \mathbb{F}\left( {\varphi \rightarrow \psi }\right) & \rightarrow \mathbb{F}1 \end{matrix} \]

We now have a choice as to whether to apply \( \vee \mathbb{T} \) to line 2 or \( \rightarrow \mathbb{F} \) to line 3. It actually doesn't matter which order we pick, as long as each signed formula has its corresponding rule applied in every branch. So let's pick the first one. The \( \vee \mathbb{T} \) rule allows the tableau to branch, and the two conclusions of the rule will be the new signed formulas added to the two new branches.

We have not applied the \( \rightarrow \mathbb{F} \) rule to line 3 yet: let's do that now. To save time, we apply it to both branches. Recall that we write a checkmark next to a signed formula only if we have applied the corresponding rule in every open branch. So it's a good idea to apply a rule at the end of every branch that contains the signed formula the rule applies to. That way we won't have to return to that signed formula lower down in the various branches.

The right branch is now closed. On the left branch, we can still apply the \( \neg \mathbb{T} \) rule to line 4. This results in \( \mathbb{F}\varphi \) and closes the left branch.
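The tableau diagrams for these steps did not survive extraction. The finished closed tableau can be reconstructed as follows, with the two branches after line 3 written as left and right columns (this side-by-side layout is our convention for rendering branching):

\[ \begin{matrix} \text{1.} & \mathbb{F}\left( {\neg \varphi \vee \psi }\right) \rightarrow \left( {\varphi \rightarrow \psi }\right) \checkmark & \text{Assumption} \\ \text{2.} & \mathbb{T}\neg \varphi \vee \psi \checkmark & \rightarrow \mathbb{F}1 \\ \text{3.} & \mathbb{F}\left( {\varphi \rightarrow \psi }\right) \checkmark & \rightarrow \mathbb{F}1 \end{matrix} \]

\[ \begin{matrix} \text{4.} & \mathbb{T}\neg \varphi \checkmark & \vee \mathbb{T}2 & \qquad & \text{4.} & \mathbb{T}\psi & \vee \mathbb{T}2 \\ \text{5.} & \mathbb{T}\varphi & \rightarrow \mathbb{F}3 & & \text{5.} & \mathbb{T}\varphi & \rightarrow \mathbb{F}3 \\ \text{6.} & \mathbb{F}\psi & \rightarrow \mathbb{F}3 & & \text{6.} & \mathbb{F}\psi & \rightarrow \mathbb{F}3 \\ \text{7.} & \mathbb{F}\varphi & \neg \mathbb{T}4 & & & \otimes & \\ & \otimes & & & & & \end{matrix} \]

The right branch closes with \( \mathbb{T}\psi \) (line 4) and \( \mathbb{F}\psi \) (line 6); the left branch closes with \( \mathbb{T}\varphi \) (line 5) and \( \mathbb{F}\varphi \) (line 7).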
|
Yes
|
Example 11.6. We can give tableaux for any number of signed formulas as assumptions. Often it is also necessary to apply more than one rule that allows branching; and in general a tableau can have any number of branches. For instance, consider a tableau for \( \{ \mathbb{T}\varphi \vee \left( {\psi \land \chi }\right) ,\mathbb{F}\left( {\varphi \vee \psi }\right) \land \left( {\varphi \vee \chi }\right) \} \) .
|
We start by applying the \( \vee \mathbb{T} \) rule to the first assumption.

Now we can apply the \( \land \mathbb{F} \) rule to line 2. We do this on both branches simultaneously, and can therefore check off line 2.

Now we can apply \( \vee \mathbb{F} \) to all the branches containing \( \varphi \vee \psi \).

The leftmost branch is now closed. Let's now apply \( \vee \mathbb{F} \) to \( \varphi \vee \chi \). Note that we moved the result of applying \( \vee \mathbb{F} \) a second time below for clarity. In this instance it would not have been needed, since the justifications would have been the same.

Two branches remain open, and \( \mathbb{T}\psi \land \chi \) on line 3 remains unchecked. We apply \( \land \mathbb{T} \) to it to obtain a closed tableau.
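The closed tableau shows that the two assumptions cannot be jointly satisfied. The semantic counterpart, checked here by brute force over all valuations (our encoding, assumed purely for illustration), is that no valuation makes \( \varphi \vee \left( {\psi \land \chi }\right) \) true while \( \left( {\varphi \vee \psi }\right) \land \left( {\varphi \vee \chi }\right) \) is false:

```python
from itertools import product

def counterexample_exists():
    # Search all eight valuations for one that satisfies both assumptions:
    # T(phi v (psi & chi)) and F((phi v psi) & (phi v chi)).
    for phi, psi, chi in product([True, False], repeat=3):
        premise = phi or (psi and chi)
        conclusion = (phi or psi) and (phi or chi)
        if premise and not conclusion:
            return True
    return False

print(counterexample_exists())  # False, matching the closed tableau
```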
|
Yes
|
Proposition 11.10 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
|
Proof. If \( \varphi \in \Gamma \), then \( \{ \varphi \} \) is a finite subset of \( \Gamma \) and the tableau

\[ \begin{matrix} \text{1.} & \mathbb{F}\varphi & \text{Assumption} \\ \text{2.} & \mathbb{T}\varphi & \text{Assumption} \\ & \otimes & \end{matrix} \]

is closed.
|
No
|
Proposition 11.11 (Monotony). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \) .
|
Proof. Any finite subset of \( \Gamma \) is also a finite subset of \( \Delta \) .
|
No
|
Proposition 11.12 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
|
Proof. If \( \{ \varphi \} \cup \Delta \vdash \psi \), then there is a finite subset \( {\Delta }_{0} = \left\{ {{\chi }_{1},\ldots ,{\chi }_{n}}\right\} \subseteq \Delta \) such that

\[ \left\{ {\mathbb{F}\psi ,\mathbb{T}\varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{n}}\right\} \]

has a closed tableau. If \( \Gamma \vdash \varphi \), then there are \( {\theta }_{1},\ldots ,{\theta }_{m} \in \Gamma \) such that

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\theta }_{1},\ldots ,\mathbb{T}{\theta }_{m}}\right\} \]

has a closed tableau.

Now consider the tableau with assumptions

\[ \mathbb{F}\psi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{n},\mathbb{T}{\theta }_{1},\ldots ,\mathbb{T}{\theta }_{m}. \]

Apply the Cut rule on \( \varphi \). This generates two branches, one with \( \mathbb{T}\varphi \) in it, the other with \( \mathbb{F}\varphi \). Thus, on the one branch, all of

\[ \left\{ {\mathbb{F}\psi ,\mathbb{T}\varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{n}}\right\} \]

are available. Since there is a closed tableau for these assumptions, we can attach it to that branch; every branch through \( \mathbb{T}\varphi \) closes. On the other branch, all of

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\theta }_{1},\ldots ,\mathbb{T}{\theta }_{m}}\right\} \]

are available, so we can also complete the other side to obtain a closed tableau. This shows \( \Gamma \cup \Delta \vdash \psi \).
|
Yes
|
Proposition 11.13. \( \Gamma \) is inconsistent iff \( \Gamma \vdash \varphi \) for every sentence \( \varphi \) .
|
Proof. Exercise.
|
No
|
Proposition 11.14 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \).
|
1. If \( \Gamma \vdash \varphi \), then there is a finite subset \( {\Gamma }_{0} = \left\{ {{\psi }_{1},\ldots ,{\psi }_{n}}\right\} \subseteq \Gamma \) and a closed tableau for

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} . \]

This tableau also shows \( {\Gamma }_{0} \vdash \varphi \).
|
Yes
|
Proposition 11.15. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. There are finite \( {\Gamma }_{0} = \left\{ {{\psi }_{1},\ldots ,{\psi }_{n}}\right\} \subseteq \Gamma \) and \( {\Gamma }_{1} = \left\{ {{\chi }_{1},\ldots ,{\chi }_{m}}\right\} \subseteq \Gamma \) such that

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

and

\[ \left\{ {\mathbb{T}\varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{m}}\right\} \]

have closed tableaux (the first because \( \Gamma \vdash \varphi \), the second because \( \Gamma \cup \{ \varphi \} \) is inconsistent). Using the Cut rule on \( \varphi \) we can combine these into a single closed tableau that shows \( {\Gamma }_{0} \cup {\Gamma }_{1} \) is inconsistent. Since \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \), also \( {\Gamma }_{0} \cup {\Gamma }_{1} \subseteq \Gamma \), hence \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 11.16. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
|
Proof. First suppose \( \Gamma \vdash \varphi \), i.e., there is a closed tableau for

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} . \]

Using the \( \neg \mathbb{T} \) rule, this can be turned into a closed tableau for

\[ \left\{ {\mathbb{T}\neg \varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} . \]

On the other hand, if there is a closed tableau for the latter, we can turn it into a closed tableau for the former by removing every formula that results from \( \neg \mathbb{T} \) applied to the first assumption \( \mathbb{T}\neg \varphi \) as well as that assumption, and adding the assumption \( \mathbb{F}\varphi \). For if a branch was closed before because it contained the conclusion of \( \neg \mathbb{T} \) applied to \( \mathbb{T}\neg \varphi \), i.e., \( \mathbb{F}\varphi \), the corresponding branch in the new tableau is also closed. If a branch in the old tableau was closed because it contained the assumption \( \mathbb{T}\neg \varphi \) as well as \( \mathbb{F}\neg \varphi \), we can turn it into a closed branch by applying \( \neg \mathbb{F} \) to \( \mathbb{F}\neg \varphi \) to obtain \( \mathbb{T}\varphi \). This closes the branch since we added \( \mathbb{F}\varphi \) as an assumption.
|
Yes
|
Proposition 11.17. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
|
Proof. Suppose \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \). Then there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) such that

\[ \left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

has a closed tableau. Replace the assumption \( \mathbb{F}\varphi \) by \( \mathbb{T}\neg \varphi \), and insert the conclusion of \( \neg \mathbb{T} \) applied to \( \mathbb{T}\neg \varphi \), namely \( \mathbb{F}\varphi \), after the assumptions. Any sentence in the tableau justified by appeal to line 1 in the old tableau is now justified by appeal to line \( n + 2 \). So if the old tableau was closed, the new one is. It shows that \( \Gamma \) is inconsistent, since all assumptions are in \( \Gamma \).
|
Yes
|
Proposition 11.18. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. If there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) and \( {\chi }_{1},\ldots ,{\chi }_{m} \in \Gamma \) such that

\[ \left\{ {\mathbb{T}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \]

and

\[ \left\{ {\mathbb{T}\neg \varphi ,\mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{m}}\right\} \]

both have closed tableaux, we can construct a tableau that shows that \( \Gamma \) is inconsistent by using as assumptions \( \mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n} \) together with \( \mathbb{T}{\chi }_{1},\ldots ,\mathbb{T}{\chi }_{m} \), followed by an application of the Cut rule, yielding two branches, one starting with \( \mathbb{T}\varphi \), the other with \( \mathbb{F}\varphi \). Add on the part below the assumptions of the first tableau on the left side. Here, every rule application is still correct, and every branch closes. On the right side, add the part below the assumptions of the second tableau, with the results of any applications of \( \neg \mathbb{T} \) to \( \mathbb{T}\neg \varphi \) removed.

For if a branch was closed before because it contained the conclusion of \( \neg \mathbb{T} \) applied to \( \mathbb{T}\neg \varphi \), i.e., \( \mathbb{F}\varphi \), the corresponding branch in the new tableau is also closed, since \( \mathbb{F}\varphi \) is now available from the application of Cut. If a branch in the old tableau was closed because it contained the assumption \( \mathbb{T}\neg \varphi \) as well as \( \mathbb{F}\neg \varphi \), we can turn it into a closed branch by applying \( \neg \mathbb{F} \) to \( \mathbb{F}\neg \varphi \) to obtain \( \mathbb{T}\varphi \), which closes the branch together with the \( \mathbb{F}\varphi \) from Cut.
|
Yes
|
Proposition 11.19. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \) .
|
Proof. 1. Both \( \{ \mathbb{F}\varphi ,\mathbb{T}\varphi \land \psi \} \) and \( \{ \mathbb{F}\psi ,\mathbb{T}\varphi \land \psi \} \) have closed tableaux:

\[ \begin{matrix} \text{1.} & \mathbb{F}\varphi & \text{Assumption} \\ \text{2.} & \mathbb{T}\varphi \land \psi & \text{Assumption} \\ \text{3.} & \mathbb{T}\varphi & \land \mathbb{T}2 \\ & \otimes & \end{matrix} \]
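The second closed tableau, for \( \{ \mathbb{F}\psi ,\mathbb{T}\varphi \land \psi \} \), is exactly analogous; it uses the other conclusion of the \( \land \mathbb{T} \) rule:

\[ \begin{matrix} \text{1.} & \mathbb{F}\psi & \text{Assumption} \\ \text{2.} & \mathbb{T}\varphi \land \psi & \text{Assumption} \\ \text{3.} & \mathbb{T}\psi & \land \mathbb{T}2 \\ & \otimes & \end{matrix} \]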
|
No
|
Proposition 11.20. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
|
Proof. 1. We give a closed tableau for \( \{ \mathbb{T}\varphi \vee \psi ,\mathbb{T}\neg \varphi ,\mathbb{T}\neg \psi \} \).
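The tableau diagram itself was lost in extraction; it can be reconstructed as follows, with the two branches after line 3 written as left and right columns (a layout convention of ours):

\[ \begin{matrix} \text{1.} & \mathbb{T}\varphi \vee \psi \checkmark & \text{Assumption} \\ \text{2.} & \mathbb{T}\neg \varphi & \text{Assumption} \\ \text{3.} & \mathbb{T}\neg \psi & \text{Assumption} \end{matrix} \]

\[ \begin{matrix} \text{4.} & \mathbb{T}\varphi & \vee \mathbb{T}1 & \qquad & \text{4.} & \mathbb{T}\psi & \vee \mathbb{T}1 \\ \text{5.} & \mathbb{F}\varphi & \neg \mathbb{T}2 & & \text{5.} & \mathbb{F}\psi & \neg \mathbb{T}3 \\ & \otimes & & & & \otimes & \end{matrix} \]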
|
Yes
|
Proposition 11.21. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) .
|
Proof. 1. \( \{ \mathbb{F}\psi ,\mathbb{T}\varphi \rightarrow \psi ,\mathbb{T}\varphi \} \) has a closed tableau.
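The missing diagram can be reconstructed as follows, with the two branches produced by \( \rightarrow \mathbb{T} \) written as left and right columns (our layout convention):

\[ \begin{matrix} \text{1.} & \mathbb{F}\psi & \text{Assumption} \\ \text{2.} & \mathbb{T}\varphi \rightarrow \psi \checkmark & \text{Assumption} \\ \text{3.} & \mathbb{T}\varphi & \text{Assumption} \end{matrix} \]

\[ \begin{matrix} \text{4.} & \mathbb{F}\varphi & \rightarrow \mathbb{T}2 & \qquad & \text{4.} & \mathbb{T}\psi & \rightarrow \mathbb{T}2 \\ & \otimes & & & & \otimes & \end{matrix} \]

The left branch closes against \( \mathbb{T}\varphi \) on line 3, the right against \( \mathbb{F}\psi \) on line 1.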
|
Yes
|
Corollary 11.25. If \( \Gamma \vdash \varphi \) then \( \Gamma \vDash \varphi \) .
|
Proof. If \( \Gamma \vdash \varphi \) then for some \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma ,\left\{ {\mathbb{F}\varphi ,\mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}}\right\} \) has a closed tableau. By Theorem 11.23, every valuation \( \mathfrak{v} \) either makes some \( {\psi }_{i} \) false or makes \( \varphi \) true. Hence, if \( \mathfrak{v} \vDash \Gamma \) then also \( \mathfrak{v} \vDash \varphi \) .
|
Yes
|
Corollary 11.26. If \( \Gamma \) is satisfiable, then it is consistent.
|
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then there are \( {\psi }_{1},\ldots ,{\psi }_{n} \in \Gamma \) and a closed tableau for \( \{ \mathbb{T}{\psi }_{1},\ldots ,\mathbb{T}{\psi }_{n}\} \). By Theorem 11.23, there is no \( \mathfrak{v} \) such that \( \mathfrak{v} \vDash {\psi }_{i} \) for all \( i = 1,\ldots ,n \). But then \( \Gamma \) is not satisfiable.
|
Yes
|
Suppose we want to prove \( \left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) \)
|
Clearly, this is not an instance of any of our axioms, so we have to use the MP rule to derive it. Our only rule is MP, which given \( \varphi \) and \( \varphi \rightarrow \psi \) allows us to justify \( \psi \). One strategy would be to use eq. (12.6) with \( \varphi \) being \( \neg \theta \), \( \psi \) being \( \alpha \), and \( \chi \) being \( \theta \rightarrow \alpha \), i.e., the instance

\[ \left( {\neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) }\right) . \]

Why? Two applications of MP yield the last part, which is what we want. And we easily see that \( \neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) \) is an instance of eq. (12.10), and \( \alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) \) is an instance of eq. (12.7). So our derivation is:

\[ \begin{matrix} \text{1.} & \neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) & \text{eq. (12.10)} \\ \text{2.} & \left( {\neg \theta \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) }\right) & \text{eq. (12.6)} \\ \text{3.} & \left( {\alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) \rightarrow \left( {\left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) }\right) & 1,2,\mathrm{MP} \\ \text{4.} & \alpha \rightarrow \left( {\theta \rightarrow \alpha }\right) & \text{eq. (12.7)} \\ \text{5.} & \left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) & 3,4,\mathrm{MP} \end{matrix} \]
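Soundness (proved later in this chapter) predicts that anything derivable from the axioms alone is a tautology. As an independent check, this sketch (our encoding) verifies the derived formula \( \left( {\neg \theta \vee \alpha }\right) \rightarrow \left( {\theta \rightarrow \alpha }\right) \) by truth table:

```python
from itertools import product

def implies(a, b):
    # Material conditional.
    return (not a) or b

def derived_is_tautology():
    # Check (~theta v alpha) -> (theta -> alpha) for all valuations.
    return all(implies((not t) or a, implies(t, a))
               for t, a in product([True, False], repeat=2))

print(derived_is_tautology())  # True
```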
|
Yes
|
Let’s try to find a derivation of \( \theta \rightarrow \theta \).
|
\[ \begin{matrix} \text{1.} & \theta \rightarrow \left( {\left( {\theta \rightarrow \theta }\right) \rightarrow \theta }\right) & \text{eq. (12.7)} \\ \text{2.} & \left( {\theta \rightarrow \left( {\left( {\theta \rightarrow \theta }\right) \rightarrow \theta }\right) }\right) \rightarrow \left( {\left( {\theta \rightarrow \left( {\theta \rightarrow \theta }\right) }\right) \rightarrow \left( {\theta \rightarrow \theta }\right) }\right) & \text{eq. (12.8)} \\ \text{3.} & \left( {\theta \rightarrow \left( {\theta \rightarrow \theta }\right) }\right) \rightarrow \left( {\theta \rightarrow \theta }\right) & 1,2,\mathrm{MP} \\ \text{4.} & \theta \rightarrow \left( {\theta \rightarrow \theta }\right) & \text{eq. (12.7)} \\ \text{5.} & \theta \rightarrow \theta & 3,4,\mathrm{MP} \end{matrix} \]
|
Yes
|
Sometimes we want to show that there is a derivation of some formula from some other formulas \( \Gamma \) . For instance, let’s show that we can derive \( \varphi \rightarrow \chi \) from \( \Gamma = \{ \varphi \rightarrow \psi ,\psi \rightarrow \chi \} \) .
|
\[ \begin{matrix} \text{1.} & \varphi \rightarrow \psi & \text{HYP} \\ \text{2.} & \psi \rightarrow \chi & \text{HYP} \\ \text{3.} & \left( {\psi \rightarrow \chi }\right) \rightarrow \left( {\varphi \rightarrow \left( {\psi \rightarrow \chi }\right) }\right) & \text{eq. (12.7)} \\ \text{4.} & \varphi \rightarrow \left( {\psi \rightarrow \chi }\right) & 2,3,\mathrm{MP} \\ \text{5.} & \left( {\varphi \rightarrow \left( {\psi \rightarrow \chi }\right) }\right) \rightarrow \left( {\left( {\varphi \rightarrow \psi }\right) \rightarrow \left( {\varphi \rightarrow \chi }\right) }\right) & \text{eq. (12.8)} \\ \text{6.} & \left( {\varphi \rightarrow \psi }\right) \rightarrow \left( {\varphi \rightarrow \chi }\right) & 4,5,\mathrm{MP} \\ \text{7.} & \varphi \rightarrow \chi & 1,6,\mathrm{MP} \end{matrix} \]
|
Yes
|
Proposition 12.10. If \( \Gamma \vdash \varphi \rightarrow \psi \) and \( \Gamma \vdash \psi \rightarrow \chi \), then \( \Gamma \vdash \varphi \rightarrow \chi \)
|
Proof. Suppose \( \Gamma \vdash \varphi \rightarrow \psi \) and \( \Gamma \vdash \psi \rightarrow \chi \). Then there is a derivation of \( \varphi \rightarrow \psi \) from \( \Gamma \), and a derivation of \( \psi \rightarrow \chi \) from \( \Gamma \) as well. Combine these into a single derivation by concatenating them. Now add lines 3-7 of the derivation in the preceding example. This is a derivation of \( \varphi \rightarrow \chi \), which is the last line of the new derivation, from \( \Gamma \). Note that the justifications of lines 4 and 7 remain valid if the reference to line number 2 is replaced by reference to the last line of the derivation of \( \psi \rightarrow \chi \), and the reference to line number 1 by reference to the last line of the derivation of \( \varphi \rightarrow \psi \).
|
Yes
|
Proposition 12.14 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
|
Proof. The formula \( \varphi \) by itself is a derivation of \( \varphi \) from \( \Gamma \) .
|
Yes
|
Proposition 12.15 (Monotony). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \) .
|
Proof. Any derivation of \( \varphi \) from \( \Gamma \) is also a derivation of \( \varphi \) from \( \Delta \) .
|
Yes
|
Proposition 12.16 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
|
Proof. Suppose \( \{ \varphi \} \cup \Delta \vdash \psi \). Then there is a derivation \( {\psi }_{1},\ldots ,{\psi }_{l} = \psi \) from \( \{ \varphi \} \cup \Delta \). Some of the steps in that derivation will be correct because of a rule which refers to a prior line \( {\psi }_{i} = \varphi \). By hypothesis, there is a derivation of \( \varphi \) from \( \Gamma \), i.e., a derivation \( {\varphi }_{1},\ldots ,{\varphi }_{k} = \varphi \) where every \( {\varphi }_{i} \) is an axiom, an element of \( \Gamma \), or correct by a rule of inference. Now consider the sequence

\[ {\varphi }_{1},\ldots ,{\varphi }_{k} = \varphi ,{\psi }_{1},\ldots ,{\psi }_{l} = \psi . \]

This is a correct derivation of \( \psi \) from \( \Gamma \cup \Delta \), since every \( {\psi }_{i} = \varphi \) is now justified by the same rule which justifies \( {\varphi }_{k} = \varphi \).
|
Yes
|
Proposition 12.17. \( \Gamma \) is inconsistent iff \( \Gamma \vdash \varphi \) for every \( \varphi \) .
|
Proof. Exercise.
|
No
|
Proposition 12.18 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \) .
|
1. If \( \Gamma \vdash \varphi \), then there is a finite sequence of formulas \( {\varphi }_{1},\ldots ,{\varphi }_{n} \) so that \( \varphi \equiv {\varphi }_{n} \) and each \( {\varphi }_{i} \) is either a logical axiom, an element of \( \Gamma \), or follows from previous formulas by modus ponens. Take \( {\Gamma }_{0} \) to be those \( {\varphi }_{i} \) which are in \( \Gamma \). Then the derivation is likewise a derivation from \( {\Gamma }_{0} \), and so \( {\Gamma }_{0} \vdash \varphi \).
|
Yes
|
Proposition 12.19. If \( \Gamma \vdash \varphi \) and \( \Gamma \vdash \varphi \rightarrow \psi \), then \( \Gamma \vdash \psi \) .
|
Proof. We have that \( \{ \varphi ,\varphi \rightarrow \psi \} \vdash \psi \):

\[ \begin{matrix} \text{1.} & \varphi & \text{HYP} \\ \text{2.} & \varphi \rightarrow \psi & \text{HYP} \\ \text{3.} & \psi & 1,2,\mathrm{MP} \end{matrix} \]

By Proposition 12.16, \( \Gamma \vdash \psi \).
|
Yes
|
Theorem 12.20 (Deduction Theorem). \( \Gamma \cup \{ \varphi \} \vdash \psi \) if and only if \( \Gamma \vdash \varphi \rightarrow \psi \) .
|
Proof. The "if" direction: suppose \( \Gamma \vdash \varphi \rightarrow \psi \). Then \( \Gamma \cup \{ \varphi \} \vdash \varphi \rightarrow \psi \) by Proposition 12.15, and \( \Gamma \cup \{ \varphi \} \vdash \varphi \) by Proposition 12.14, so \( \Gamma \cup \{ \varphi \} \vdash \psi \) by Proposition 12.19.
|
No
|
Proposition 12.22. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. If \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \cup \{ \varphi \} \vdash \bot \) . By Proposition 12.14, \( \Gamma \vdash \psi \) for every \( \psi \in \Gamma \) . Since also \( \Gamma \vdash \varphi \) by hypothesis, \( \Gamma \vdash \psi \) for every \( \psi \in \Gamma \cup \{ \varphi \} \) . By Proposition 12.16, \( \Gamma \vdash \bot \), i.e., \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 12.23. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
|
Proof. First suppose \( \Gamma \vdash \varphi \). Then \( \Gamma \cup \{ \neg \varphi \} \vdash \varphi \) by Proposition 12.15, and \( \Gamma \cup \{ \neg \varphi \} \vdash \neg \varphi \) by Proposition 12.14. We also have \( \vdash \neg \varphi \rightarrow \left( {\varphi \rightarrow \bot }\right) \) by eq. (12.10). So by two applications of Proposition 12.19, we have \( \Gamma \cup \{ \neg \varphi \} \vdash \bot \).

Now assume \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent, i.e., \( \Gamma \cup \{ \neg \varphi \} \vdash \bot \). By the deduction theorem, \( \Gamma \vdash \neg \varphi \rightarrow \bot \). Also \( \Gamma \vdash \left( {\neg \varphi \rightarrow \bot }\right) \rightarrow \neg \neg \varphi \) by eq. (12.13), so \( \Gamma \vdash \neg \neg \varphi \) by Proposition 12.19. Since \( \Gamma \vdash \neg \neg \varphi \rightarrow \varphi \) (eq. (12.14)), we have \( \Gamma \vdash \varphi \) by Proposition 12.19 again.
|
Yes
|
Proposition 12.24. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
|
Proof. Since \( \neg \varphi \in \Gamma \), we have \( \Gamma \vdash \neg \varphi \) by Proposition 12.14, and \( \Gamma \vdash \neg \varphi \rightarrow \left( {\varphi \rightarrow \bot }\right) \) by eq. (12.10). Since also \( \Gamma \vdash \varphi \) by hypothesis, \( \Gamma \vdash \bot \) by two applications of Proposition 12.19.
|
Yes
|
Proposition 12.25. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. Exercise.
|
No
|
Proposition 12.26. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \)
|
Proof. 1. From eq. (12.1) and eq. (12.2), respectively, by modus ponens.
|
No
|
Proposition 12.27. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
|
Proof. 1. From eq. (12.10) we get \( \vdash \neg \varphi \rightarrow \left( {\varphi \rightarrow \bot }\right) \) and \( \vdash \neg \psi \rightarrow \left( {\psi \rightarrow \bot }\right) \). So by the deduction theorem, we have \( \{ \neg \varphi \} \vdash \varphi \rightarrow \bot \) and \( \{ \neg \psi \} \vdash \psi \rightarrow \bot \). From eq. (12.6) we get \( \{ \neg \varphi ,\neg \psi \} \vdash \left( {\varphi \vee \psi }\right) \rightarrow \bot \). By the deduction theorem, \( \{ \varphi \vee \psi ,\neg \varphi ,\neg \psi \} \vdash \bot \).
|
Yes
|
Proposition 12.28. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) .
|
Proof. 1. We can derive:

\[ \begin{matrix} \text{1.} & \varphi & \text{HYP} \\ \text{2.} & \varphi \rightarrow \psi & \text{HYP} \\ \text{3.} & \psi & 1,2,\mathrm{MP} \end{matrix} \]
|
Yes
|
Proposition 12.29. If \( \varphi \) is an axiom, then \( \mathfrak{v} \vDash \varphi \) for each valuation \( \mathfrak{v} \) .
|
Proof. Do truth tables for each axiom to verify that they are tautologies.
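This truth-table check is mechanical. A sketch in Python for a few of the schemas used in this chapter, instantiated with atoms \( p, q, r \) (the encodings of eqs. (12.7), (12.8), and (12.14) follow the instances cited earlier in the text):

```python
from itertools import product

def implies(a, b):
    # Material conditional.
    return (not a) or b

def all_valuations(n):
    return product([True, False], repeat=n)

# eq. (12.7): p -> (q -> p)
assert all(implies(p, implies(q, p)) for p, q in all_valuations(2))

# eq. (12.8): (p -> (q -> r)) -> ((p -> q) -> (p -> r))
assert all(implies(implies(p, implies(q, r)),
                   implies(implies(p, q), implies(p, r)))
           for p, q, r in all_valuations(3))

# eq. (12.14): ~~p -> p
assert all(implies(not (not p), p) for (p,) in all_valuations(1))

print("all checked axioms are tautologies")
```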
|
No
|
Theorem 12.30 (Soundness). If \( \Gamma \vdash \varphi \) then \( \Gamma \vDash \varphi \) .
|
Proof. By induction on the length of the derivation of \( \varphi \) from \( \Gamma \). If there are no steps justified by inferences, then all formulas in the derivation are either instances of axioms or are in \( \Gamma \). By the previous proposition, all the axioms are tautologies, and hence if \( \varphi \) is an axiom then \( \Gamma \vDash \varphi \). If \( \varphi \in \Gamma \), then trivially \( \Gamma \vDash \varphi \).

If the last step of the derivation of \( \varphi \) is justified by modus ponens, then there are formulas \( \psi \) and \( \psi \rightarrow \varphi \) in the derivation, and the induction hypothesis applies to the parts of the derivation ending in those formulas (since they contain at least one fewer step justified by an inference). So, by induction hypothesis, \( \Gamma \vDash \psi \) and \( \Gamma \vDash \psi \rightarrow \varphi \). Then \( \Gamma \vDash \varphi \) by Theorem 7.17.
|
Yes
|
Corollary 12.32. If \( \Gamma \) is satisfiable, then it is consistent.
|
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then \( \Gamma \vdash \bot \), i.e., there is a derivation of \( \bot \) from \( \Gamma \). By Theorem 12.30, any valuation \( \mathfrak{v} \) that satisfies \( \Gamma \) must satisfy \( \bot \). Since \( \mathfrak{v} \nvDash \bot \) for every valuation \( \mathfrak{v} \), no \( \mathfrak{v} \) can satisfy \( \Gamma \), i.e., \( \Gamma \) is not satisfiable.
|
Yes
|
1. If \( \Gamma \vdash \varphi \), then \( \varphi \in \Gamma \) .
|
Suppose that \( \Gamma \vdash \varphi \). Suppose to the contrary that \( \varphi \notin \Gamma \). Since \( \Gamma \) is complete, \( \neg \varphi \in \Gamma \). By Propositions 10.17, 11.17, 9.19 and 12.24, \( \Gamma \) is inconsistent. This contradicts the assumption that \( \Gamma \) is consistent. Hence, it cannot be the case that \( \varphi \notin \Gamma \), so \( \varphi \in \Gamma \).
|
Yes
|
Lemma 13.3 (Lindenbaum’s Lemma). Every consistent set \( \Gamma \) in a language \( \mathcal{L} \) can be extended to a complete and consistent set \( {\Gamma }^{ * } \) .
|
Proof. Let \( \Gamma \) be consistent. Let \( {\varphi }_{0},{\varphi }_{1},\ldots \) be an enumeration of all the sentences of \( \mathcal{L} \). Define \( {\Gamma }_{0} = \Gamma \), and

\[ {\Gamma }_{n + 1} = \left\{ \begin{array}{ll} {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} & \text{if }{\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \text{ is consistent;} \\ {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} & \text{otherwise.} \end{array}\right. \]

Let \( {\Gamma }^{ * } = \mathop{\bigcup }\limits_{{n \geq 0}}{\Gamma }_{n} \).

Each \( {\Gamma }_{n} \) is consistent: \( {\Gamma }_{0} \) is consistent by definition. If \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \), this is because the latter is consistent. If it isn't, \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \). We have to verify that \( {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \) is consistent. Suppose it's not. Then both \( {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \) and \( {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \) are inconsistent. This means that \( {\Gamma }_{n} \) would be inconsistent by Propositions 10.17, 11.17, 9.19 and 12.24, contrary to the induction hypothesis.

For every \( n \) and every \( i < n \), \( {\Gamma }_{i} \subseteq {\Gamma }_{n} \). This follows by a simple induction on \( n \). For \( n = 0 \), there are no \( i < 0 \), so the claim holds automatically. For the inductive step, suppose it is true for \( n \). We have \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \) or \( {\Gamma }_{n + 1} = {\Gamma }_{n} \cup \left\{ {\neg {\varphi }_{n}}\right\} \) by construction. So \( {\Gamma }_{n} \subseteq {\Gamma }_{n + 1} \). If \( i < n \), then \( {\Gamma }_{i} \subseteq {\Gamma }_{n} \) by inductive hypothesis, and so \( {\Gamma }_{i} \subseteq {\Gamma }_{n + 1} \) by transitivity of \( \subseteq \).

From this it follows that every finite subset of \( {\Gamma }^{ * } \) is a subset of \( {\Gamma }_{n} \) for some \( n \), since each \( \psi \in {\Gamma }^{ * } \) not already in \( {\Gamma }_{0} \) is added at some stage \( i \). If \( n \) is the last one of these, then all \( \psi \) in the finite subset are in \( {\Gamma }_{n} \). So, every finite subset of \( {\Gamma }^{ * } \) is consistent. By Propositions 10.14, 11.14, 9.16 and 12.18, \( {\Gamma }^{ * } \) is consistent.

Every sentence of \( \operatorname{Frm}\left( \mathcal{L}\right) \) appears on the list used to define \( {\Gamma }^{ * } \). If \( {\varphi }_{n} \notin {\Gamma }^{ * } \), then that is because \( {\Gamma }_{n} \cup \left\{ {\varphi }_{n}\right\} \) was inconsistent. But then \( \neg {\varphi }_{n} \in {\Gamma }^{ * } \), so \( {\Gamma }^{ * } \) is complete.
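The stage-by-stage construction can be illustrated on a finite fragment. In this sketch (entirely our encoding) the enumeration is a short hand-picked list of formulas over two atoms, and consistency of a finite set is tested semantically via satisfiability, which for finite sets agrees with syntactic consistency by soundness and completeness:

```python
from itertools import product

def ev(f, val):
    # Evaluate a formula (atom string, or ('not'/'and'/'or', ...)) at val.
    if isinstance(f, str):
        return val[f]
    op = f[0]
    if op == 'not':
        return not ev(f[1], val)
    if op == 'and':
        return ev(f[1], val) and ev(f[2], val)
    if op == 'or':
        return ev(f[1], val) or ev(f[2], val)
    raise ValueError(op)

ATOMS = ['p', 'q']

def consistent(fmls):
    # Finite-set consistency, tested as satisfiability.
    return any(all(ev(f, dict(zip(ATOMS, vs))) for f in fmls)
               for vs in product([True, False], repeat=len(ATOMS)))

def lindenbaum(gamma, enumeration):
    # At stage n, add phi_n if that stays consistent, else add ~phi_n.
    g = list(gamma)
    for phi in enumeration:
        g.append(phi if consistent(g + [phi]) else ('not', phi))
    return g

enumeration = ['p', 'q', ('and', 'p', 'q'), ('or', 'p', 'q')]
g_star = lindenbaum([('not', 'q')], enumeration)
print(consistent(g_star))  # True: every stage preserves consistency
```

Starting from \( \{ \neg q\} \), the construction adds \( p \), then \( \neg q \), then \( \neg (p \land q) \), then \( p \vee q \), deciding every listed sentence while staying consistent.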
|
Yes
|
Lemma 13.5 (Truth Lemma). \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \varphi \in {\Gamma }^{ * } \) .
|
Proof. We prove both directions simultaneously, and by induction on \( \varphi \) .\n\n1. \( \varphi \equiv \bot : \mathfrak{v}\left( {\Gamma }^{ * }\right) \nvDash \bot \) by definition of satisfaction. On the other hand, \( \bot \notin {\Gamma }^{ * } \) since \( {\Gamma }^{ * } \) is consistent.\n\n2. \( \varphi \equiv p : \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash p \) iff \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \left( p\right) = \mathbb{T} \) (by the definition of satisfaction) iff \( p \in {\Gamma }^{ * } \) (by the construction of \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \) ).\n\n3. \( \varphi \equiv \neg \psi : \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \nvDash \psi \) (by definition of satisfaction). By induction hypothesis, \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \nvDash \psi \) iff \( \psi \notin {\Gamma }^{ * } \) . Since \( {\Gamma }^{ * } \) is consistent and complete, \( \psi \notin {\Gamma }^{ * } \) iff \( \neg \psi \in {\Gamma }^{ * } \) .\n\n4. \( \varphi \equiv \psi \land \chi \) : exercise.\n\n5. \( \varphi \equiv \psi \vee \chi : \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \psi \) or \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \chi \) (by definition of satisfaction) iff \( \psi \in {\Gamma }^{ * } \) or \( \chi \in {\Gamma }^{ * } \) (by induction hypothesis). This is the case iff \( \left( {\psi \vee \chi }\right) \in {\Gamma }^{ * } \) (by Proposition 13.2(3)).\n\n6. \( \varphi \equiv \psi \rightarrow \chi \) : exercise.
|
No
|
Theorem 13.6 (Completeness Theorem). Let \( \Gamma \) be a set of sentences. If \( \Gamma \) is consistent, it is satisfiable.
|
Proof. Suppose \( \Gamma \) is consistent. By Lemma 13.3, there is a \( {\Gamma }^{ * } \supseteq \Gamma \) which is consistent and complete. By Lemma 13.5, \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \varphi \) iff \( \varphi \in {\Gamma }^{ * } \) . From this it follows in particular that for all \( \varphi \in \Gamma ,\mathfrak{v}\left( {\Gamma }^{ * }\right) \vDash \varphi \), so \( \Gamma \) is satisfiable.
|
Yes
|
Corollary 13.7 (Completeness Theorem, Second Version). For all \( \Gamma \) and sentences \( \varphi \) : if \( \Gamma \vDash \varphi \) then \( \Gamma \vdash \varphi \) .
|
Proof. Note that the \( \Gamma \) 's in Corollary 13.7 and Theorem 13.6 are universally quantified. To make sure we do not confuse ourselves, let us restate Theorem 13.6 using a different variable: for any set of sentences \( \Delta \), if \( \Delta \) is consistent, it is satisfiable. By contraposition, if \( \Delta \) is not satisfiable, then \( \Delta \) is inconsistent. We will use this to prove the corollary.\n\nSuppose that \( \Gamma \vDash \varphi \) . Then \( \Gamma \cup \{ \neg \varphi \} \) is unsatisfiable by Proposition 7.16. Taking \( \Gamma \cup \{ \neg \varphi \} \) as our \( \Delta \), the previous version of Theorem 13.6 gives us that \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent. By Propositions 10.16, 11.16, 9.18 and 12.23, \( \Gamma \vdash \varphi \) .
|
No
|
Theorem 13.9 (Compactness Theorem). The following hold for any set of sentences \( \Gamma \) and sentence \( \varphi \) :\n\n1. \( \Gamma \vDash \varphi \) iff there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vDash \varphi \) .\n\n2. \( \Gamma \) is satisfiable if and only if it is finitely satisfiable.
|
Proof. We prove (2). If \( \Gamma \) is satisfiable, then there is a valuation \( \mathfrak{v} \) such that \( \mathfrak{v} \vDash \varphi \) for all \( \varphi \in \Gamma \) . Of course, this \( \mathfrak{v} \) also satisfies every finite subset of \( \Gamma \), so \( \Gamma \) is finitely satisfiable.\n\nNow suppose that \( \Gamma \) is finitely satisfiable. Then every finite subset \( {\Gamma }_{0} \subseteq \) \( \Gamma \) is satisfiable. By soundness (Corollaries 10.24,11.26,9.28 and 12.32), every finite subset is consistent. Then \( \Gamma \) itself must be consistent by Propositions 10.14,11.14,9.16 and 12.18. By completeness (Theorem 13.6), since \( \Gamma \) is consistent, it is satisfiable.
|
Yes
|
Theorem 13.12 (Compactness). \( \Gamma \) is satisfiable if and only if it is finitely satisfiable.
|
Proof. If \( \Gamma \) is satisfiable, then there is a valuation \( \mathfrak{v} \) such that \( \mathfrak{v} \vDash \varphi \) for all \( \varphi \in \Gamma \) . Of course, this \( \mathfrak{v} \) also satisfies every finite subset of \( \Gamma \), so \( \Gamma \) is finitely satisfiable.\n\nNow suppose that \( \Gamma \) is finitely satisfiable. By Lemma 13.11, \( \Gamma \) can be extended to a complete and finitely satisfiable set \( {\Gamma }^{ * } \) . Construct the valuation \( \mathfrak{v}\left( {\Gamma }^{ * }\right) \) as in Definition 13.4. The proof of the Truth Lemma (Lemma 13.5) goes through if we replace references to Proposition 13.2 with the corresponding facts about complete, finitely satisfiable sets.
|
No
|
Lemma 14.8. The number of left and right parentheses in a formula \( \varphi \) are equal.
|
Proof. We prove this by induction on the way \( \varphi \) is constructed. This requires two things: (a) We have to prove first that all atomic formulas have the property in question (the induction basis). (b) Then we have to prove that when we construct new formulas out of given formulas, the new formulas have the property provided the old ones do.\n\nLet \( l\left( \varphi \right) \) be the number of left parentheses, and \( r\left( \varphi \right) \) the number of right parentheses in \( \varphi \), and \( l\left( t\right) \) and \( r\left( t\right) \) similarly the number of left and right parentheses in a term \( t \) . We leave the proof that for any term \( t, l\left( t\right) = r\left( t\right) \) as an exercise.\n\n1. \( \varphi \equiv \bot : \varphi \) has 0 left and 0 right parentheses.\n\n2. \( \varphi \equiv R\left( {{t}_{1},\ldots ,{t}_{n}}\right) : \;l\left( \varphi \right) = 1 + l\left( {t}_{1}\right) + \cdots + l\left( {t}_{n}\right) = 1 + r\left( {t}_{1}\right) + \cdots + \) \( r\left( {t}_{n}\right) = r\left( \varphi \right) \) . Here we make use of the fact, left as an exercise, that \( l\left( t\right) = r\left( t\right) \) for any term \( t \) .\n\n3. \( \varphi \equiv {t}_{1} = {t}_{2} : l\left( \varphi \right) = l\left( {t}_{1}\right) + l\left( {t}_{2}\right) = r\left( {t}_{1}\right) + r\left( {t}_{2}\right) = r\left( \varphi \right) \) .\n\n4. \( \varphi \equiv \neg \psi \) : By induction hypothesis, \( l\left( \psi \right) = r\left( \psi \right) \) . Thus \( l\left( \varphi \right) = l\left( \psi \right) = \) \( r\left( \psi \right) = r\left( \varphi \right) \)\n\n5. \( \varphi \equiv \left( {\psi * \chi }\right) \) : By induction hypothesis, \( l\left( \psi \right) = r\left( \psi \right) \) and \( l\left( \chi \right) = r\left( \chi \right) \) . Thus \( l\left( \varphi \right) = 1 + l\left( \psi \right) + l\left( \chi \right) = 1 + r\left( \psi \right) + r\left( \chi \right) = r\left( \varphi \right) \) .\n\n6. 
\( \varphi \equiv \forall {x\psi } \) : By induction hypothesis, \( l\left( \psi \right) = r\left( \psi \right) \) . Thus, \( l\left( \varphi \right) = l\left( \psi \right) = \) \( r\left( \psi \right) = r\left( \varphi \right) \) .\n\n7. \( \varphi \equiv \exists {x\psi } \) : Similarly.
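The lemma can be spot-checked on concrete strings: since \( l \) and \( r \) just count parentheses, a character count suffices. In this sketch the ASCII formulas are illustrative stand-ins for the official syntax (`~` for negation, `->` for the conditional, `Ax` for the quantifier).

```python
def l(phi):
    # number of left parentheses in the string phi
    return phi.count('(')

def r(phi):
    # number of right parentheses in the string phi
    return phi.count(')')

formulas = [
    "_|_",                       # bottom: 0 left, 0 right parentheses
    "R(f(x),c)",                 # atomic: parens come from R(...) and the terms
    "(R(x,y) & Q(z))",           # a binary connective adds one matched pair
    "~(P(x) -> Q(y))",           # negation adds no parentheses of its own
    "Ax (R(x,y) | x = y)",       # a quantifier adds no parentheses of its own
]
balanced = [l(phi) == r(phi) for phi in formulas]
```

Each case in the list mirrors one case of the induction in the proof.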
|
No
|
Lemma 14.10. If \( \varphi \) is a formula, and \( \psi \) is a proper prefix of \( \varphi \), then \( \psi \) is not a formula.
|
Proof. Exercise.
|
No
|
Proposition 14.11. If \( \varphi \) is an atomic formula, then it satisfies one, and only one of the following conditions.\n\n1. \( \varphi \equiv \bot \) .\n\n2. \( \varphi \equiv R\left( {{t}_{1},\ldots ,{t}_{n}}\right) \) where \( R \) is an \( n \) -place predicate symbol, \( {t}_{1},\ldots ,{t}_{n} \) are terms, and each of \( R,{t}_{1},\ldots ,{t}_{n} \) is uniquely determined.\n\n3. \( \varphi \equiv {t}_{1} = {t}_{2} \) where \( {t}_{1} \) and \( {t}_{2} \) are uniquely determined terms.
|
Proof. Exercise.
|
No
|
Proposition 14.12 (Unique Readability). Every formula satisfies one, and only one of the following conditions.\n\n1. \( \varphi \) is atomic.\n\n2. \( \varphi \) is of the form \( \neg \psi \) .\n\n3. \( \varphi \) is of the form \( \left( {\psi \land \chi }\right) \) .\n\n4. \( \varphi \) is of the form \( \left( {\psi \vee \chi }\right) \) .\n\n5. \( \varphi \) is of the form \( \left( {\psi \rightarrow \chi }\right) \) .\n\n6. \( \varphi \) is of the form \( \forall {x\psi } \) .\n\n7. \( \varphi \) is of the form \( \exists {x\psi } \) .\n\nMoreover, in each case \( \psi \), or \( \psi \) and \( \chi \), are uniquely determined. This means that, e.g., there are no different pairs \( \psi ,\chi \) and \( {\psi }^{\prime },{\chi }^{\prime } \) so that \( \varphi \) is both of the form \( \left( {\psi \rightarrow \chi }\right) \) and \( \left( {{\psi }^{\prime } \rightarrow {\chi }^{\prime }}\right) \) .
|
Proof. The formation rules require that if a formula is not atomic, it must start with an opening parenthesis, with \( \neg \), or with a quantifier. On the other hand, every formula that starts with one of the following symbols must be atomic: a predicate symbol, a function symbol, a constant symbol, \( \bot \) .\n\nSo we really only have to show that if \( \varphi \) is of the form \( \left( {\psi * \chi }\right) \) and also of the form \( \left( {{\psi }^{\prime }{ * }^{\prime }{\chi }^{\prime }}\right) \), then \( \psi \equiv {\psi }^{\prime } \), \( \chi \equiv {\chi }^{\prime } \), and \( * = { * }^{\prime } \).\n\nSo suppose both \( \varphi \equiv \left( {\psi * \chi }\right) \) and \( \varphi \equiv \left( {{\psi }^{\prime }{ * }^{\prime }{\chi }^{\prime }}\right) \) . Then either \( \psi \equiv {\psi }^{\prime } \) or not. If it is, clearly \( * = { * }^{\prime } \) and \( \chi \equiv {\chi }^{\prime } \), since they then are substrings of \( \varphi \) that begin in the same place and are of the same length. The other case is \( \psi \not\equiv {\psi }^{\prime } \) . Since \( \psi \) and \( {\psi }^{\prime } \) are both substrings of \( \varphi \) that begin at the same place, one must be a proper prefix of the other. But this is impossible by Lemma 14.10.
|
Yes
|
Consider the following formula:\n\n\[ \exists {v}_{0}\underset{\psi }{\underbrace{{A}_{0}^{2}\left( {{v}_{0},{v}_{1}}\right) }} \]
|
\( \psi \) represents the scope of \( \exists {v}_{0} \) . The quantifier binds the occurrence of \( {v}_{0} \) in \( \psi \), but does not bind the occurrence of \( {v}_{1} \) . So \( {v}_{1} \) is a free variable in this case.
|
Yes
|
A structure \( \mathfrak{M} \) for the language of arithmetic consists of a set \( \left| \mathfrak{M}\right| \), an element \( {\mathrm{o}}^{\mathfrak{M}} \) of \( \left| \mathfrak{M}\right| \) as interpretation of the constant symbol \( \mathrm{o} \), a one-place function \( {\prime }^{\mathfrak{M}} : \left| \mathfrak{M}\right| \rightarrow \left| \mathfrak{M}\right| \), two two-place functions \( { + }^{\mathfrak{M}} \) and \( { \times }^{\mathfrak{M}} \), both \( {\left| \mathfrak{M}\right| }^{2} \rightarrow \left| \mathfrak{M}\right| \), and a two-place relation \( { < }^{\mathfrak{M}} \subseteq {\left| \mathfrak{M}\right| }^{2} \).
|
An obvious example of such a structure is the following:\n\n1. \( \left| \mathfrak{N}\right| = \mathbb{N} \)\n\n2. \( {\mathrm{o}}^{\mathfrak{N}} = 0 \)\n\n3. \( {\prime }^{\mathfrak{N}}\left( n\right) = n + 1 \) for all \( n \in \mathbb{N} \)\n\n4. \( { + }^{\mathfrak{N}}\left( {n, m}\right) = n + m \) for all \( n, m \in \mathbb{N} \)\n\n5. \( { \times }^{\mathfrak{N}}\left( {n, m}\right) = n \cdot m \) for all \( n, m \in \mathbb{N} \)\n\n\[ \text{6.}{ < }^{\mathfrak{N}} = \{ \langle n, m\rangle : n \in \mathbb{N}, m \in \mathbb{N}, n < m\} \]
|
Yes
|
Definition 14.30 (Covered structure). A structure is covered if every element of the domain is the value of some closed term.
|
Example 14.31. Let
|
No
|
Let \( \mathcal{L} \) be the language with constant symbols zero, one, two, \( \ldots \), the binary predicate symbol \( < \), and the binary function symbols + and \( \times \) . Then a structure \( \mathfrak{M} \) for \( \mathcal{L} \) is the one with domain \( \left| \mathfrak{M}\right| = \{ 0,1,2,\ldots \} \) and assignments zero \( {}^{\mathfrak{M}} = 0 \), one \( {}^{\mathfrak{M}} = 1 \), two \( {}^{\mathfrak{M}} = 2 \), and so forth. For the binary relation symbol \( < \), the set \( { < }^{\mathfrak{M}} \) is the set of all pairs \( \left\langle {{c}_{1},{c}_{2}}\right\rangle \in {\left| \mathfrak{M}\right| }^{2} \) such that \( {c}_{1} \) is less than \( {c}_{2} \) : for example, \( \langle 1,3\rangle \in { < }^{\mathfrak{M}} \) but \( \langle 2,2\rangle \notin { < }^{\mathfrak{M}} \) . For the binary function symbol \( + \), define \( { + }^{\mathfrak{M}} \) in the usual way-for example, \( { + }^{\mathfrak{M}}\left( {2,3}\right) \) maps to 5, and similarly for the binary function symbol \( \times \) . Hence, the value of four is just 4, and the value of \( \times \left( {\text{two,} + \left( \text{three, zero}\right) }\right) \) (or in infix notation, two \( \times \) (three \( + \) zero)) is
|
\[
{\operatorname{Val}}^{\mathfrak{M}}\left( { \times \left( {\text{two,} + \left( \text{three, zero}\right) }\right) }\right) =
\]
\[
= { \times }^{\mathfrak{M}}\left( {{\mathrm{{Val}}}^{\mathfrak{M}}\left( \text{two}\right) ,{\mathrm{{Val}}}^{\mathfrak{M}}\left( {+\left( {\text{three},\text{zero}}\right) }\right) }\right)
\]
\[
= { \times }^{\mathfrak{M}}\left( {{\operatorname{Val}}^{\mathfrak{M}}\left( \text{ two }\right) ,{ + }^{\mathfrak{M}}\left( {{\operatorname{Val}}^{\mathfrak{M}}\left( \text{ three }\right) ,{\operatorname{Val}}^{\mathfrak{M}}\left( \text{ zero }\right) }\right) }\right)
\]
\[
= { \times }^{\mathfrak{M}}\left( {{\text{two}}^{\mathfrak{M}},{ + }^{\mathfrak{M}}\left( {{\text{three}}^{\mathfrak{M}},{\text{zero}}^{\mathfrak{M}}}\right) }\right)
\]
\[
= { \times }^{\mathfrak{M}}\left( {2,{ + }^{\mathfrak{M}}\left( {3,0}\right) }\right)
\]
\[
= { \times }^{\mathfrak{M}}\left( {2,3}\right)
\]
\[
= 6
\]
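The chain of equalities above can be mimicked by a small term evaluator. This is an illustrative sketch: terms are encoded as nested tuples, and the dictionary `M` (with `x` standing in for \( \times \)) plays the role of the structure's interpretation function.

```python
# Interpretations of the constant and function symbols in M
M = {
    'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
    '+': lambda m, n: m + n,
    'x': lambda m, n: m * n,
}

def val(t):
    # Val^M: constants look up their interpretation; a compound term
    # applies the interpreted function symbol to the values of its arguments
    if isinstance(t, str):
        return M[t]
    f, *args = t
    return M[f](*(val(s) for s in args))

# two x (three + zero), in the prefix notation of the example
result = val(('x', 'two', ('+', 'three', 'zero')))
```

Unwinding `val` on this term reproduces exactly the displayed computation, ending in 6.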
|
Yes
|
Example 14.36. Let \( \mathcal{L} = \{ a, b, f, R\} \) where \( a \) and \( b \) are constant symbols, \( f \) is a two-place function symbol, and \( R \) is a two-place predicate symbol. Consider the structure \( \mathfrak{M} \) defined by:\n\n1. \( \left| \mathfrak{M}\right| = \{ 1,2,3,4\} \)\n\n2. \( {a}^{\mathfrak{M}} = 1 \)\n\n3. \( {b}^{\mathfrak{M}} = 2 \)\n\n4. \( {f}^{\mathfrak{M}}\left( {x, y}\right) = x + y \) if \( x + y \leq 3 \) and \( = 3 \) otherwise.\n\n5. \( {R}^{\mathfrak{M}} = \{ \langle 1,1\rangle ,\langle 1,2\rangle ,\langle 2,3\rangle ,\langle 2,4\rangle \} \)\n\nThe function \( s\left( x\right) = 1 \) that assigns \( 1 \in \left| \mathfrak{M}\right| \) to every variable is a variable assignment for \( \mathfrak{M} \) . Then\n\n\[ \n{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {f\left( {a, b}\right) }\right) = {f}^{\mathfrak{M}}\left( {{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( a\right) ,{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( b\right) }\right) .\n\]
|
Since \( a \) and \( b \) are constant symbols, \( {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( a\right) = {a}^{\mathfrak{M}} = 1 \) and \( {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( b\right) = {b}^{\mathfrak{M}} = \) 2. So\n\n\[ \n{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {f\left( {a, b}\right) }\right) = {f}^{\mathfrak{M}}\left( {1,2}\right) = 1 + 2 = 3.\n\]
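This evaluation can be sketched directly. The encodings below are illustrative: `f_M` is the clipped addition of the example, `interp` gives the constant interpretations, and `s` is the constant variable assignment \( s(x) = 1 \).

```python
interp = {'a': 1, 'b': 2}          # a^M = 1, b^M = 2
R_M = {(1, 1), (1, 2), (2, 3), (2, 4)}

def f_M(x, y):
    # f^M(x, y) = x + y if x + y <= 3, and = 3 otherwise
    return x + y if x + y <= 3 else 3

def s(var):
    # the variable assignment assigning 1 to every variable
    return 1

def val(t):
    # Val_s^M: constants, then variables, then compound terms ('f', t1, t2)
    if t in interp:
        return interp[t]
    if isinstance(t, str):
        return s(t)
    _, t1, t2 = t
    return f_M(val(t1), val(t2))

v = val(('f', 'a', 'b'))           # f(a, b)
```

As in the example, `val(('f', 'a', 'b'))` works out to \( {f}^{\mathfrak{M}}(1,2) = 3 \); a term like `f(b, b)` would clip to 3 as well, since \( 2 + 2 > 3 \).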
|
Yes
|
Proposition 14.37. If the variables in a term \( t \) are among \( {x}_{1},\ldots ,{x}_{n} \), and \( {s}_{1}\left( {x}_{i}\right) = \) \( {s}_{2}\left( {x}_{i}\right) \) for \( i = 1,\ldots, n \), then \( {\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( t\right) = {\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( t\right) \) .
|
Proof. By induction on the complexity of \( t \) . For the base case, \( t \) can be a constant symbol or one of the variables \( {x}_{1},\ldots ,{x}_{n} \) . If \( t = c \), then \( {\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( t\right) = {c}^{\mathfrak{M}} = {\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( t\right) \) . If \( t = {x}_{i} \), then \( {s}_{1}\left( {x}_{i}\right) = {s}_{2}\left( {x}_{i}\right) \) by the hypothesis of the proposition, and so \( {\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( t\right) = {s}_{1}\left( {x}_{i}\right) = {s}_{2}\left( {x}_{i}\right) = {\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( t\right) \) .\n\nFor the inductive step, assume that \( t = f\left( {{t}_{1},\ldots ,{t}_{k}}\right) \) and that the claim holds for \( {t}_{1},\ldots ,{t}_{k} \) . Then\n\n\[ \n{\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( t\right) = {\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {f\left( {{t}_{1},\ldots ,{t}_{k}}\right) }\right) = \n\]\n\n\[ \n= {f}^{\mathfrak{M}}\left( {{\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {t}_{1}\right) ,\ldots ,{\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {t}_{k}\right) }\right) \n\]\n\nFor \( j = 1,\ldots, k \), the variables of \( {t}_{j} \) are among \( {x}_{1},\ldots ,{x}_{n} \) . So by induction hypothesis, \( {\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {t}_{j}\right) = {\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( {t}_{j}\right) \) .
So,\n\n\[ \n{\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( t\right) = {\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {f\left( {{t}_{1},\ldots ,{t}_{k}}\right) }\right) = \n\]\n\n\[ \n= {f}^{\mathfrak{M}}\left( {{\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {t}_{1}\right) ,\ldots ,{\operatorname{Val}}_{{s}_{1}}^{\mathfrak{M}}\left( {t}_{k}\right) }\right) = \n\]\n\n\[ \n= {f}^{\mathfrak{M}}\left( {{\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( {t}_{1}\right) ,\ldots ,{\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( {t}_{k}\right) }\right) = \n\]\n\n\[ \n= {\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( {f\left( {{t}_{1},\ldots ,{t}_{k}}\right) }\right) = {\operatorname{Val}}_{{s}_{2}}^{\mathfrak{M}}\left( t\right) . \n\]
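The proposition can be spot-checked: two assignments that agree on a term's variables give it the same value, regardless of what they do elsewhere. In this illustrative sketch a single function symbol `f` is interpreted as addition on the natural numbers, and integer leaves stand for numeral constants interpreted as themselves.

```python
def val(t, s):
    # Val_s^M for terms over one binary function symbol f, f^M = addition
    if isinstance(t, int):       # numeral constant, interpreted as itself
        return t
    if isinstance(t, str):       # variable: look up the assignment
        return s[t]
    _, t1, t2 = t                # compound term ('f', t1, t2)
    return val(t1, s) + val(t2, s)

t = ('f', 'x', ('f', 'y', 3))    # the variables of t are x and y
s1 = {'x': 1, 'y': 2, 'z': 7}
s2 = {'x': 1, 'y': 2, 'z': 0}    # agrees with s1 on x and y, differs on z
same = val(t, s1) == val(t, s2)
```

Since `s1` and `s2` agree on `x` and `y`, the values coincide even though the assignments disagree on `z`.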
|
Yes
|
Corollary 14.39. If \( \varphi \) is a sentence and \( s \) a variable assignment, then \( \mathfrak{M}, s \vDash \varphi \) iff \( \mathfrak{M},{s}^{\prime } \vDash \varphi \) for every variable assignment \( {s}^{\prime } \) .
|
Proof. Let \( {s}^{\prime } \) be any variable assignment. Since \( \varphi \) is a sentence, it has no free variables, and so every variable assignment \( {s}^{\prime } \) trivially assigns the same things to all free variables of \( \varphi \) as does \( s \) . So the condition of Proposition 14.38 is satisfied, and we have \( \mathfrak{M}, s \vDash \varphi \) iff \( \mathfrak{M},{s}^{\prime } \vDash \varphi \) .
|
Yes
|
Proposition 14.41. Let \( \mathfrak{M} \) be a structure, \( \varphi \) be a sentence, and \( s \) a variable assignment. \( \mathfrak{M} \vDash \varphi \) iff \( \mathfrak{M}, s \vDash \varphi \) .
|
Proof. Exercise.
|
No
|
Proposition 14.42. Suppose \( \varphi \left( x\right) \) only contains \( x \) free, and \( \mathfrak{M} \) is a structure. Then:\n\n1. \( \mathfrak{M} \vDash \exists {x\varphi }\left( x\right) \) iff \( \mathfrak{M}, s \vDash \varphi \left( x\right) \) for at least one variable assignment \( s \) .\n\n2. \( \mathfrak{M} \vDash \forall {x\varphi }\left( x\right) \) iff \( \mathfrak{M}, s \vDash \varphi \left( x\right) \) for all variable assignments \( s \) .
|
Proof. Exercise.
|
No
|
Proposition 14.43 (Extensionality). Let \( \varphi \) be a formula, and \( {\mathfrak{M}}_{1} \) and \( {\mathfrak{M}}_{2} \) be structures with \( \left| {\mathfrak{M}}_{1}\right| = \left| {\mathfrak{M}}_{2}\right| \), and \( s \) a variable assignment on \( \left| {\mathfrak{M}}_{1}\right| = \left| {\mathfrak{M}}_{2}\right| \) . If \( {c}^{{\mathfrak{M}}_{1}} = \) \( {c}^{{\mathfrak{M}}_{2}},{R}^{{\mathfrak{M}}_{1}} = {R}^{{\mathfrak{M}}_{2}} \), and \( {f}^{{\mathfrak{M}}_{1}} = {f}^{{\mathfrak{M}}_{2}} \) for every constant symbol \( c \), relation symbol \( R \) , and function symbol \( f \) occurring in \( \varphi \), then \( {\mathfrak{M}}_{1}, s \vDash \varphi \) iff \( {\mathfrak{M}}_{2}, s \vDash \varphi \) .
|
Proof. First prove (by induction on \( t \) ) that for every term, \( {\operatorname{Val}}_{s}^{{\mathfrak{M}}_{1}}\left( t\right) = {\operatorname{Val}}_{s}^{{\mathfrak{M}}_{2}}\left( t\right) \) . Then prove the proposition by induction on \( \varphi \), making use of the claim just proved for the induction basis (where \( \varphi \) is atomic).
|
No
|
Corollary 14.44 (Extensionality for Sentences). Let \( \varphi \) be a sentence and \( {\mathfrak{M}}_{1},{\mathfrak{M}}_{2} \) as in Proposition 14.43. Then \( {\mathfrak{M}}_{1} \vDash \varphi \) iff \( {\mathfrak{M}}_{2} \vDash \varphi \) .
|
Proof. Follows from Proposition 14.43 by Corollary 14.39.
|
No
|
Proposition 14.45. Let \( \mathfrak{M} \) be a structure, \( t \) and \( {t}^{\prime } \) terms, and \( s \) a variable assignment. Let \( {s}^{\prime }{ \sim }_{x}s \) be the \( x \) -variant of \( s \) given by \( {s}^{\prime }\left( x\right) = {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {t}^{\prime }\right) \) . Then \( {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {t\left\lbrack {{t}^{\prime }/x}\right\rbrack }\right) = \) \( {\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( t\right) \)
|
Proof. By induction on \( t \) . \n\n1. If \( t \) is a constant, say, \( t \equiv c \), then \( t\left\lbrack {{t}^{\prime }/x}\right\rbrack = c \), and \( {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( c\right) = {c}^{\mathfrak{M}} = \) \( {\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( c\right) \). \n\n2. If \( t \) is a variable other than \( x \), say, \( t \equiv y \), then \( t\left\lbrack {{t}^{\prime }/x}\right\rbrack = y \), and \( {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( y\right) = \) \( {\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( y\right) \) since \( {s}^{\prime }{ \sim }_{x}s \). \n\n3. If \( t \equiv x \), then \( t\left\lbrack {{t}^{\prime }/x}\right\rbrack = {t}^{\prime } \). But \( {\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( x\right) = {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {t}^{\prime }\right) \) by definition of \( {s}^{\prime } \). \n\n4. If \( t \equiv f\left( {{t}_{1},\ldots ,{t}_{n}}\right) \) then we have: \n\n\[ \n{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {t\left\lbrack {{t}^{\prime }/x}\right\rbrack }\right) = \n\] \n\n\[ \n= {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {f\left( {{t}_{1}\left\lbrack {{t}^{\prime }/x}\right\rbrack ,\ldots ,{t}_{n}\left\lbrack {{t}^{\prime }/x}\right\rbrack }\right) }\right) \n\] \n\nby definition of \( t\left\lbrack {{t}^{\prime }/x}\right\rbrack \) \n\n\[ \n= {f}^{\mathfrak{M}}\left( {{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {{t}_{1}\left\lbrack {{t}^{\prime }/x}\right\rbrack }\right) ,\ldots ,{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {{t}_{n}\left\lbrack {{t}^{\prime }/x}\right\rbrack }\right) }\right) \n\] \n\n\[ \n\text{by definition of}{\operatorname{Val}}_{s}^{\mathfrak{M}}\left( {f\left( \ldots \right) }\right) \n\] \n\n\[ \n= {f}^{\mathfrak{M}}\left( {{\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( {t}_{1}\right) ,\ldots ,{\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( {t}_{n}\right) }\right) \n\] \n\nby induction 
hypothesis \n\n\[ \n= {\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( t\right) \text{by definition of}{\operatorname{Val}}_{{s}^{\prime }}^{\mathfrak{M}}\left( {f\left( \ldots \right) }\right) \n\]
|
Yes
|
Proposition 14.46. Let \( \mathfrak{M} \) be a structure, \( \varphi \) a formula, \( t \) a term, and \( s \) a variable assignment. Let \( {s}^{\prime }{ \sim }_{x}s \) be the \( x \) -variant of \( s \) given by \( {s}^{\prime }\left( x\right) = {\operatorname{Val}}_{s}^{\mathfrak{M}}\left( t\right) \) . Then \( \mathfrak{M}, s \vDash \varphi \left\lbrack {t/x}\right\rbrack \) iff \( \mathfrak{M},{s}^{\prime } \vDash \varphi \) .
|
Proof. Exercise.
|
No
|
Proposition 14.50. A sentence \( \varphi \) is valid iff \( \Gamma \vDash \varphi \) for every set of sentences \( \Gamma \) .
|
Proof. For the forward direction, let \( \varphi \) be valid, and let \( \Gamma \) be a set of sentences. Let \( \mathfrak{M} \) be a structure so that \( \mathfrak{M} \vDash \Gamma \) . Since \( \varphi \) is valid, \( \mathfrak{M} \vDash \varphi \), hence \( \Gamma \vDash \varphi \) .\n\nFor the contrapositive of the reverse direction, let \( \varphi \) be invalid, so there is a structure \( \mathfrak{M} \) with \( \mathfrak{M} \nvDash \varphi \) . When \( \Gamma = \{ \top \} \), since \( \top \) is valid, \( \mathfrak{M} \vDash \Gamma \) . Hence, there is a structure \( \mathfrak{M} \) so that \( \mathfrak{M} \vDash \Gamma \) but \( \mathfrak{M} \nvDash \varphi \), hence \( \Gamma \) does not entail \( \varphi \) .
|
Yes
|
Proposition 14.51. \( \Gamma \vDash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is unsatisfiable.
|
Proof. For the forward direction, suppose \( \Gamma \vDash \varphi \) and suppose to the contrary that there is a structure \( \mathfrak{M} \) so that \( \mathfrak{M} \vDash \Gamma \cup \{ \neg \varphi \} \) . Since \( \mathfrak{M} \vDash \Gamma \) and \( \Gamma \vDash \varphi \), \( \mathfrak{M} \vDash \varphi \) . Also, since \( \mathfrak{M} \vDash \Gamma \cup \{ \neg \varphi \} \), \( \mathfrak{M} \vDash \neg \varphi \), so we have both \( \mathfrak{M} \vDash \varphi \) and \( \mathfrak{M} \nvDash \varphi \), a contradiction. Hence, there can be no such structure \( \mathfrak{M} \), so \( \Gamma \cup \{ \neg \varphi \} \) is unsatisfiable.\n\nFor the reverse direction, suppose \( \Gamma \cup \{ \neg \varphi \} \) is unsatisfiable. So for every structure \( \mathfrak{M} \), either \( \mathfrak{M} \nvDash \Gamma \) or \( \mathfrak{M} \vDash \varphi \) . Hence, for every structure \( \mathfrak{M} \) with \( \mathfrak{M} \vDash \Gamma \), \( \mathfrak{M} \vDash \varphi \), so \( \Gamma \vDash \varphi \) .
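In the propositional case the equivalence can be checked by brute force over valuations. This is an illustrative sketch (formula encoding and function names are assumptions of the sketch): entailment and satisfiability are both decided by enumerating valuations over a fixed finite set of variables.

```python
from itertools import product

def ev(phi, v):
    # evaluate a formula (nested tuple) under valuation v
    op = phi[0]
    if op == 'var':
        return v[phi[1]]
    if op == 'not':
        return not ev(phi[1], v)
    if op == 'imp':
        return (not ev(phi[1], v)) or ev(phi[2], v)

def valuations(variables):
    for vals in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, vals))

def entails(gamma, phi, variables=('p', 'q')):
    # Gamma |= phi: phi true under every valuation satisfying Gamma
    return all(ev(phi, v) for v in valuations(variables)
               if all(ev(g, v) for g in gamma))

def satisfiable(gamma, variables=('p', 'q')):
    return any(all(ev(g, v) for g in gamma) for v in valuations(variables))

p, q = ('var', 'p'), ('var', 'q')
gamma = [p, ('imp', p, q)]
lhs = entails(gamma, q)                       # Gamma |= q
rhs = not satisfiable(gamma + [('not', q)])   # Gamma + {~q} unsatisfiable
```

Both sides agree here, and they also agree in the negative: \( \{ p\} \) does not entail \( q \), and correspondingly \( \{ p,\neg q\} \) is satisfiable.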
|
Yes
|
Proposition 14.52. If \( \Gamma \subseteq {\Gamma }^{\prime } \) and \( \Gamma \vDash \varphi \), then \( {\Gamma }^{\prime } \vDash \varphi \) .
|
Proof. Suppose that \( \Gamma \subseteq {\Gamma }^{\prime } \) and \( \Gamma \vDash \varphi \) . Let \( \mathfrak{M} \) be such that \( \mathfrak{M} \vDash {\Gamma }^{\prime } \) ; then \( \mathfrak{M} \vDash \Gamma \), and since \( \Gamma \vDash \varphi \), we get that \( \mathfrak{M} \vDash \varphi \) . Hence, whenever \( \mathfrak{M} \vDash {\Gamma }^{\prime },\mathfrak{M} \vDash \varphi \) , so \( {\Gamma }^{\prime } \vDash \varphi \) .
|
Yes
|
Theorem 14.53 (Semantic Deduction Theorem). \( \Gamma \cup \{ \varphi \} \vDash \psi \) iff \( \Gamma \vDash \varphi \rightarrow \psi \) .
|
Proof. For the forward direction, let \( \Gamma \cup \{ \varphi \} \vDash \psi \) and let \( \mathfrak{M} \) be a structure so that \( \mathfrak{M} \vDash \Gamma \) . If \( \mathfrak{M} \vDash \varphi \), then \( \mathfrak{M} \vDash \Gamma \cup \{ \varphi \} \), so since \( \Gamma \cup \{ \varphi \} \) entails \( \psi \), we get \( \mathfrak{M} \vDash \psi \) . Therefore, \( \mathfrak{M} \vDash \varphi \rightarrow \psi \), so \( \Gamma \vDash \varphi \rightarrow \psi \) .\n\nFor the reverse direction, let \( \Gamma \vDash \varphi \rightarrow \psi \) and \( \mathfrak{M} \) be a structure so that \( \mathfrak{M} \vDash \) \( \Gamma \cup \{ \varphi \} \) . Then \( \mathfrak{M} \vDash \Gamma \), so \( \mathfrak{M} \vDash \varphi \rightarrow \psi \), and since \( \mathfrak{M} \vDash \varphi ,\mathfrak{M} \vDash \psi \) . Hence, whenever \( \mathfrak{M} \vDash \Gamma \cup \{ \varphi \} ,\mathfrak{M} \vDash \psi \), so \( \Gamma \cup \{ \varphi \} \vDash \psi \) .
|
Yes
|
Proposition 14.54. Let \( \mathfrak{M} \) be a structure, and \( \varphi \left( x\right) \) a formula with one free variable \( x \), and \( t \) a closed term. Then:\n\n1. \( \varphi \left( t\right) \vDash \exists {x\varphi }\left( x\right) \)\n\n2. \( \forall {x\varphi }\left( x\right) \vDash \varphi \left( t\right) \)
|
Proof. 1. Suppose \( \mathfrak{M} \vDash \varphi \left( t\right) \) . Let \( s \) be a variable assignment with \( s\left( x\right) = \) \( {\operatorname{Val}}^{\mathfrak{M}}\left( t\right) \) . Then \( \mathfrak{M}, s \vDash \varphi \left( t\right) \) since \( \varphi \left( t\right) \) is a sentence. By Proposition 14.46, \( \mathfrak{M}, s \vDash \varphi \left( x\right) \) . By Proposition 14.42, \( \mathfrak{M} \vDash \exists {x\varphi }\left( x\right) \).\n\n2. Exercise.
|
No
|
The theory of strict linear orders in the language \( {\mathcal{L}}_{ < } \) is axiomatized by the set\n\n\[ \forall x\neg x < x \]\n\n\[ \forall x\forall y\left( {\left( {x < y \vee y < x}\right) \vee x = y}\right) ,\]\n\n\[ \forall x\forall y\forall z\left( {\left( {x < y \land y < z}\right) \rightarrow x < z}\right) \]
|
It completely captures the intended structures: every strict linear order is a model of this axiom system, and vice versa, if \( R \) is a strict linear order on a set \( X \), then the structure \( \mathfrak{M} \) with \( \left| \mathfrak{M}\right| = X \) and \( { < }^{\mathfrak{M}} = R \) is a model of this theory.
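On a finite structure, the three axioms can be verified by testing all instances over the domain. A minimal sketch, taking the usual order on \( \{ 0,1,2,3\} \) (the names `X` and `R` are illustrative):

```python
from itertools import product

X = {0, 1, 2, 3}
R = {(a, b) for a in X for b in X if a < b}   # the usual strict order on X

# the three axioms, instantiated over the whole domain
irreflexive = all((x, x) not in R for x in X)
connex = all((x, y) in R or (y, x) in R or x == y
             for x, y in product(X, repeat=2))
transitive = all((x, z) in R
                 for x, y, z in product(X, repeat=3)
                 if (x, y) in R and (y, z) in R)
```

All three checks succeed, so this structure is a model of the theory.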
|
Yes
|
The theory of Peano arithmetic is axiomatized by the following sentences in the language of arithmetic \( {\mathcal{L}}_{A} \) .
|
\[ \forall x\forall y\left( {{x}^{\prime } = {y}^{\prime } \rightarrow x = y}\right) \] \[ \forall x\mathrm{o} \neq {x}^{\prime } \] \[ \forall x\left( {x + 0}\right) = x \] \[ \forall x\forall y\left( {x + {y}^{\prime }}\right) = {\left( x + y\right) }^{\prime } \] \[ \forall x\left( {x \times 0}\right) = 0 \] \[ \forall x\forall y\left( {x \times {y}^{\prime }}\right) = \left( {\left( {x \times y}\right) + x}\right) \] \[ \forall x\forall y\left( {x < y \leftrightarrow \exists z\left( {{z}^{\prime } + x}\right) = y}\right) \] plus all sentences of the form \[ \left( {\varphi \left( 0\right) \land \forall x\left( {\varphi \left( x\right) \rightarrow \varphi \left( {x}^{\prime }\right) }\right) }\right) \rightarrow \forall {x\varphi }\left( x\right) \] Since there are infinitely many sentences of the latter form, this axiom system is infinite. The latter form is called the induction schema. (Actually, the induction schema is a bit more complicated than we let on here.) The last axiom is an explicit definition of \( < \) .
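The recursion axioms for \( + \) and \( \times \) pin down the usual operations on the standard model. A minimal sketch that checks this for small values (with `succ` standing in for \( {}^{\prime } \); the function names are illustrative):

```python
def succ(n):
    # the successor function of the standard model
    return n + 1

def add(x, y):
    # defined by the two addition axioms:
    # x + 0 = x  and  x + y' = (x + y)'
    return x if y == 0 else succ(add(x, y - 1))

def mul(x, y):
    # defined by the two multiplication axioms:
    # x * 0 = 0  and  x * y' = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

# the recursively defined operations agree with ordinary + and *
checks = all(add(x, y) == x + y and mul(x, y) == x * y
             for x in range(6) for y in range(6))
```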
|
Yes
|
Show that the comprehension principle is inconsistent by giving a derivation that shows\n\n\[ \exists y\forall x\left( {x \in y \leftrightarrow x \notin x}\right) \vdash \bot . \]
|
It may help to first show \( \left( {A \rightarrow \neg A}\right) \land \left( {\neg A \rightarrow A}\right) \vdash \bot \) .
|
No
|
Every initial sequent, e.g., \( \chi \Rightarrow \chi \), is a derivation.
|
We can obtain a new derivation from this by applying, say, the WL rule,\n\n\[ \frac{\Gamma \Rightarrow \Delta }{\varphi ,\Gamma \Rightarrow \Delta }\mathrm{{WL}} \]\n\nThe rule, however, is meant to be general: we can replace the \( \varphi \) in the rule with any sentence, e.g., also with \( \theta \) . If the premise matches our initial sequent \( \chi \Rightarrow \chi \), that means that both \( \Gamma \) and \( \Delta \) are just \( \chi \), and the conclusion would then be \( \theta ,\chi \Rightarrow \chi \) . So, the following is a derivation:\n\n\[ \frac{\chi \Rightarrow \chi }{\theta ,\chi \Rightarrow \chi }\mathrm{{WL}} \]\n\nWe can now apply another rule, say XL, which allows us to switch two sentences on the left. So, the following is also a correct derivation:\n\n\[ \dfrac{\dfrac{\chi \Rightarrow \chi }{\theta ,\chi \Rightarrow \chi }\,\mathrm{WL}}{\chi ,\theta \Rightarrow \chi }\,\mathrm{XL} \]\n\nIn this application of the rule, which was given as\n\n\[ \frac{\Gamma ,\varphi ,\psi ,\Pi \Rightarrow \Delta }{\Gamma ,\psi ,\varphi ,\Pi \Rightarrow \Delta }\mathrm{{XL}} \]\n\nboth \( \Gamma \) and \( \Pi \) were empty, \( \Delta \) is \( \chi \), and the roles of \( \varphi \) and \( \psi \) are played by \( \theta \) and \( \chi \), respectively.
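The rule applications above can be mimicked concretely by treating a sequent as a pair of tuples (antecedent, succedent). This is an illustrative sketch; the encodings and the function names `WL` and `XL` are assumptions of the sketch, not official syntax.

```python
def WL(seq, phi):
    # weakening on the left: add a sentence at the front of the antecedent
    left, right = seq
    return ((phi,) + left, right)

def XL(seq, i):
    # exchange on the left: swap the sentences at positions i and i+1
    left, right = seq
    l = list(left)
    l[i], l[i + 1] = l[i + 1], l[i]
    return (tuple(l), right)

initial = (('chi',), ('chi',))     # the initial sequent chi => chi
step1 = WL(initial, 'theta')       # theta, chi => chi
step2 = XL(step1, 0)               # chi, theta => chi
```

The two steps reproduce exactly the WL and XL applications in the derivation above.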
|
Yes
|
Example 17.5. Give an LK-derivation for the sequent \( \varphi \land \psi \Rightarrow \varphi \) .
|
We begin by writing the desired end-sequent at the bottom of the derivation.\n\n\[ \varphi \land \psi \Rightarrow \varphi \]\n\nNext, we need to figure out what kind of inference could have a lower sequent of this form. This could be a structural rule, but it is a good idea to start by looking for a logical rule. The only logical connective occurring in the lower sequent is \( \land \), so we’re looking for an \( \land \) rule, and since the \( \land \) symbol occurs in the antecedent, we’re looking at the \( \land \mathrm{L} \) rule.\n\n\[ \varphi \land \psi \Rightarrow \varphi \]\n\nThere are two options for what could have been the upper sequent of the \( \land \mathrm{L} \) inference: we could have an upper sequent of \( \varphi \Rightarrow \varphi \), or of \( \psi \Rightarrow \varphi \) . Clearly, \( \varphi \Rightarrow \varphi \) is an initial sequent (which is a good thing), while \( \psi \Rightarrow \varphi \) is not derivable in general. We fill in the upper sequent:\n\n\[ \frac{\varphi \Rightarrow \varphi }{\varphi \land \psi \Rightarrow \varphi } \land \mathrm{L} \]\n\nWe now have a correct LK-derivation of the sequent \( \varphi \land \psi \Rightarrow \varphi \) .
|
Yes
|
Suppose we want to prove \( \Rightarrow \varphi \vee \neg \varphi \) .
|
Applying \( \vee \mathrm{R} \) backwards would give us one of these two derivations:\n\n\[ \frac{ \Rightarrow \varphi }{ \Rightarrow \varphi \vee \neg \varphi }\, \vee \mathrm{R}\qquad \frac{ \Rightarrow \neg \varphi }{ \Rightarrow \varphi \vee \neg \varphi }\, \vee \mathrm{R} \]\n\nNeither of these of course ends in an initial sequent. The trick is to realize that the contraction rule allows us to combine two copies of a sentence into one. When we're searching for a proof, i.e., going from bottom to top, we can keep a copy of \( \varphi \vee \neg \varphi \) in the premise, e.g.,\n\n\[ \dfrac{\dfrac{ \Rightarrow \varphi \vee \neg \varphi ,\varphi }{ \Rightarrow \varphi \vee \neg \varphi ,\varphi \vee \neg \varphi }\, \vee \mathrm{R}}{ \Rightarrow \varphi \vee \neg \varphi }\,\mathrm{CR} \]\n\nNow we can apply \( \vee \mathrm{R} \) a second time, and also get \( \neg \varphi \), which leads to a complete derivation.\n\n\[ \dfrac{\dfrac{\dfrac{\dfrac{\dfrac{\varphi \Rightarrow \varphi }{ \Rightarrow \varphi ,\neg \varphi }\,\neg \mathrm{R}}{ \Rightarrow \varphi ,\varphi \vee \neg \varphi }\, \vee \mathrm{R}}{ \Rightarrow \varphi \vee \neg \varphi ,\varphi }\,\mathrm{XR}}{ \Rightarrow \varphi \vee \neg \varphi ,\varphi \vee \neg \varphi }\, \vee \mathrm{R}}{ \Rightarrow \varphi \vee \neg \varphi }\,\mathrm{CR} \]\n
|
Yes
|
Give an LK-derivation of the sequent \( \exists x\neg \varphi \left( x\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) \) .
|
Starting as usual, we write\n\n\[ \exists x\neg \varphi \left( x\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) \]\n\nWe could either carry out the \( \exists \mathrm{L} \) rule or the \( \neg \mathrm{R} \) rule. Since the \( \exists \mathrm{L} \) rule is subject to the eigenvariable condition, it's a good idea to take care of it sooner rather than later, so we'll do that one first.\n\n\[ \frac{\neg \varphi \left( a\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) }{\exists x\neg \varphi \left( x\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) }\exists \mathrm{L} \]\n\nApplying the \( \neg \mathrm{L} \) and \( \neg \mathrm{R} \) rules backwards, we get\n\n\[ \dfrac{\dfrac{\dfrac{\dfrac{\forall {x\varphi }\left( x\right) \Rightarrow \varphi \left( a\right) }{\neg \varphi \left( a\right) ,\forall {x\varphi }\left( x\right) \Rightarrow }\,\neg \mathrm{L}}{\forall {x\varphi }\left( x\right) ,\neg \varphi \left( a\right) \Rightarrow }\,\mathrm{XL}}{\neg \varphi \left( a\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) }\,\neg \mathrm{R}}{\exists x\neg \varphi \left( x\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) }\,\exists \mathrm{L} \]\n\nAt this point, our only option is to carry out the \( \forall \mathrm{L} \) rule. Since this rule is not subject to the eigenvariable restriction, we're in the clear. 
Remember, we want to try and obtain an initial sequent (of the form \( \varphi \left( a\right) \Rightarrow \varphi \left( a\right) \) ), so we should choose \( a \) as our argument for \( \varphi \) when we apply the rule.\n\n\[ \dfrac{\dfrac{\dfrac{\dfrac{\dfrac{\varphi \left( a\right) \Rightarrow \varphi \left( a\right) }{\forall {x\varphi }\left( x\right) \Rightarrow \varphi \left( a\right) }\,\forall \mathrm{L}}{\neg \varphi \left( a\right) ,\forall {x\varphi }\left( x\right) \Rightarrow }\,\neg \mathrm{L}}{\forall {x\varphi }\left( x\right) ,\neg \varphi \left( a\right) \Rightarrow }\,\mathrm{XL}}{\neg \varphi \left( a\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) }\,\neg \mathrm{R}}{\exists x\neg \varphi \left( x\right) \Rightarrow \neg \forall {x\varphi }\left( x\right) }\,\exists \mathrm{L} \]\n\nIt is important, especially when dealing with quantifiers, to double check at this point that the eigenvariable condition has not been violated. Since the only rule we applied that is subject to the eigenvariable condition was \( \exists \mathrm{L} \) , and the eigenvariable \( a \) does not occur in its lower sequent (the end-sequent), this is a correct derivation.
|
Yes
|
Proposition 17.13 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
|
Proof. The initial sequent \( \varphi \Rightarrow \varphi \) is derivable, and \( \{ \varphi \} \subseteq \Gamma \) .
|
Yes
|
Proposition 17.14 (Monotony). If \( \Gamma \subseteq \Delta \) and \( \Gamma \vdash \varphi \), then \( \Delta \vdash \varphi \) .
|
Proof. Suppose \( \Gamma \vdash \varphi \), i.e., there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \Rightarrow \varphi \) is derivable. Since \( \Gamma \subseteq \Delta \), \( {\Gamma }_{0} \) is also a finite subset of \( \Delta \) . The derivation of \( {\Gamma }_{0} \Rightarrow \varphi \) thus also shows \( \Delta \vdash \varphi \) .
|
Yes
|
Proposition 17.15 (Transitivity). If \( \Gamma \vdash \varphi \) and \( \{ \varphi \} \cup \Delta \vdash \psi \), then \( \Gamma \cup \Delta \vdash \psi \) .
|
Proof. If \( \Gamma \vdash \varphi \), there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) and a derivation \( {\pi }_{0} \) of \( {\Gamma }_{0} \Rightarrow \varphi \) . If \( \{ \varphi \} \cup \Delta \vdash \psi \), then for some finite subset \( {\Delta }_{0} \subseteq \Delta \), there is a derivation \( {\pi }_{1} \) of \( \varphi ,{\Delta }_{0} \Rightarrow \psi \) . Consider the following derivation:\n\n\[ \dfrac{\begin{array}{c}{\pi }_{0}\\ {\Gamma }_{0} \Rightarrow \varphi \end{array}\quad \begin{array}{c}{\pi }_{1}\\ \varphi ,{\Delta }_{0} \Rightarrow \psi \end{array}}{{\Gamma }_{0},{\Delta }_{0} \Rightarrow \psi }\,\mathrm{Cut} \]\n\nSince \( {\Gamma }_{0} \cup {\Delta }_{0} \subseteq \Gamma \cup \Delta \), this shows \( \Gamma \cup \Delta \vdash \psi \) .
|
Yes
|
Proposition 17.16. \( \Gamma \) is inconsistent iff \( \Gamma \vdash \varphi \) for every sentence \( \varphi \) .
|
Proof. Exercise.
|
No
|
Proposition 17.17 (Compactness). 1. If \( \Gamma \vdash \varphi \) then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that \( {\Gamma }_{0} \vdash \varphi \) . 2. If every finite subset of \( \Gamma \) is consistent, then \( \Gamma \) is consistent.
|
Proof. 1. If \( \Gamma \vdash \varphi \), then there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that the sequent \( {\Gamma }_{0} \Rightarrow \varphi \) has a derivation. Consequently, \( {\Gamma }_{0} \vdash \varphi \) . 2. If \( \Gamma \) is inconsistent, there is a finite subset \( {\Gamma }_{0} \subseteq \Gamma \) such that LK derives \( {\Gamma }_{0} \Rightarrow \) . But then \( {\Gamma }_{0} \) is a finite subset of \( \Gamma \) that is inconsistent.
|
Yes
|
Proposition 17.18. If \( \Gamma \vdash \varphi \) and \( \Gamma \cup \{ \varphi \} \) is inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. There are finite \( {\Gamma }_{0} \) and \( {\Gamma }_{1} \subseteq \Gamma \) such that \( \mathbf{{LK}} \) derives \( {\Gamma }_{0} \Rightarrow \varphi \) and \( \varphi ,{\Gamma }_{1} \Rightarrow \) . Let the LK-derivation of \( {\Gamma }_{0} \Rightarrow \varphi \) be \( {\pi }_{0} \) and the LK-derivation of \( \varphi ,{\Gamma }_{1} \Rightarrow \) be \( {\pi }_{1} \) . We can then derive\n\n\[ \dfrac{\begin{array}{c}{\pi }_{0}\\ {\Gamma }_{0} \Rightarrow \varphi \end{array}\quad \begin{array}{c}{\pi }_{1}\\ \varphi ,{\Gamma }_{1} \Rightarrow \end{array}}{{\Gamma }_{0},{\Gamma }_{1} \Rightarrow }\,\mathrm{Cut} \]\n\nSince \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \), \( {\Gamma }_{0} \cup {\Gamma }_{1} \subseteq \Gamma \), hence \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 17.19. \( \Gamma \vdash \varphi \) iff \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.
|
Proof. First suppose \( \Gamma \vdash \varphi \), i.e., there is a derivation \( {\pi }_{0} \) of \( \Gamma \Rightarrow \varphi \) . By adding a \( \neg \mathrm{L} \) rule, we obtain a derivation of \( \neg \varphi ,\Gamma \Rightarrow \), i.e., \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent.\n\nIf \( \Gamma \cup \{ \neg \varphi \} \) is inconsistent, there is a derivation \( {\pi }_{1} \) of \( \neg \varphi ,\Gamma \Rightarrow \) . The following is a derivation of \( \Gamma \Rightarrow \varphi \) :\n\n\[ \dfrac{\dfrac{\varphi \Rightarrow \varphi }{ \Rightarrow \varphi ,\neg \varphi }\,\neg \mathrm{R}\quad \begin{array}{c}{\pi }_{1}\\ \neg \varphi ,\Gamma \Rightarrow \end{array}}{\Gamma \Rightarrow \varphi }\,\mathrm{Cut} \]
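The right-to-left direction is, under the Curry–Howard reading, classical proof by contradiction. A Lean 4 sketch (our own, using the core `Classical.byContradiction`):

```lean
-- If ¬φ leads to a contradiction, then φ holds (classically).
open Classical in
example (φ : Prop) (h : ¬φ → False) : φ := byContradiction h
```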
|
Yes
|
Proposition 17.20. If \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \), then \( \Gamma \) is inconsistent.
|
Proof. Suppose \( \Gamma \vdash \varphi \) and \( \neg \varphi \in \Gamma \) . Then there is a derivation \( \pi \) of a sequent \( {\Gamma }_{0} \Rightarrow \varphi \) . The sequent \( \neg \varphi ,{\Gamma }_{0} \Rightarrow \) is also derivable:\n\n\[ \dfrac{\dfrac{\begin{array}{c}\pi \\ {\Gamma }_{0} \Rightarrow \varphi \end{array}\quad \dfrac{\dfrac{\varphi \Rightarrow \varphi }{\neg \varphi ,\varphi \Rightarrow }\,\neg \mathrm{L}}{\varphi ,\neg \varphi \Rightarrow }\,\mathrm{XL}}{{\Gamma }_{0},\neg \varphi \Rightarrow }\,\mathrm{Cut}}{\neg \varphi ,{\Gamma }_{0} \Rightarrow }\,\mathrm{XL} \]\n\nSince \( \neg \varphi \in \Gamma \) and \( {\Gamma }_{0} \subseteq \Gamma \), this shows that \( \Gamma \) is inconsistent.
|
Yes
|
Proposition 17.21. If \( \Gamma \cup \{ \varphi \} \) and \( \Gamma \cup \{ \neg \varphi \} \) are both inconsistent, then \( \Gamma \) is inconsistent.
|
Proof. There are finite sets \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \) and LK-derivations \( {\pi }_{0} \) and \( {\pi }_{1} \) of \( \varphi ,{\Gamma }_{0} \Rightarrow \) and \( \neg \varphi ,{\Gamma }_{1} \Rightarrow \), respectively. We can then derive\n\n\[ \dfrac{\dfrac{\begin{array}{c}{\pi }_{0}\\ \varphi ,{\Gamma }_{0} \Rightarrow \end{array}}{{\Gamma }_{0} \Rightarrow \neg \varphi }\,\neg \mathrm{R}\quad \begin{array}{c}{\pi }_{1}\\ \neg \varphi ,{\Gamma }_{1} \Rightarrow \end{array}}{{\Gamma }_{0},{\Gamma }_{1} \Rightarrow }\,\mathrm{Cut} \]\n\nSince \( {\Gamma }_{0} \subseteq \Gamma \) and \( {\Gamma }_{1} \subseteq \Gamma \), \( {\Gamma }_{0} \cup {\Gamma }_{1} \subseteq \Gamma \) . Hence \( \Gamma \) is inconsistent.
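The propositional core of this argument can be sketched in Lean 4 (our formalization, not the text's):

```lean
-- If both φ and ¬φ yield a contradiction, we already have one:
-- h1 is definitionally a proof of ¬φ, which h2 refutes.
example (φ : Prop) (h1 : φ → False) (h2 : ¬φ → False) : False := h2 h1
```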
|
Yes
|
Proposition 17.22. 1. Both \( \varphi \land \psi \vdash \varphi \) and \( \varphi \land \psi \vdash \psi \) .\n\n2. \( \varphi ,\psi \vdash \varphi \land \psi \) .
|
Proof. 1. Both sequents \( \varphi \land \psi \Rightarrow \varphi \) and \( \varphi \land \psi \Rightarrow \psi \) are derivable:\n\n\[ \frac{\varphi \Rightarrow \varphi }{\varphi \land \psi \Rightarrow \varphi } \land \mathrm{L}\;\frac{\psi \Rightarrow \psi }{\varphi \land \psi \Rightarrow \psi } \land \mathrm{L} \]\n\n2. Here is a derivation of the sequent \( \varphi ,\psi \Rightarrow \varphi \land \psi \) :\n\n\[ \frac{\varphi \Rightarrow \varphi \;\psi \Rightarrow \psi }{\varphi ,\psi \Rightarrow \varphi \land \psi } \land \mathrm{R} \]
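The same facts, read through Curry–Howard, are one-liners in Lean 4 (a sketch with our own names):

```lean
example (φ ψ : Prop) (h : φ ∧ ψ) : φ := h.1                 -- ∧L toward φ
example (φ ψ : Prop) (h : φ ∧ ψ) : ψ := h.2                 -- ∧L toward ψ
example (φ ψ : Prop) (hφ : φ) (hψ : ψ) : φ ∧ ψ := ⟨hφ, hψ⟩  -- ∧R
```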
|
Yes
|
Proposition 17.23. 1. \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \) is inconsistent.
|
Proof. 1. We give a derivation of the sequent \( \varphi \vee \psi ,\neg \varphi ,\neg \psi \Rightarrow \) :\n\n\[ \dfrac{\dfrac{\dfrac{\varphi \Rightarrow \varphi }{\neg \varphi ,\varphi \Rightarrow }\,\neg \mathrm{L}}{\varphi ,\neg \varphi ,\neg \psi \Rightarrow }\quad \dfrac{\dfrac{\psi \Rightarrow \psi }{\neg \psi ,\psi \Rightarrow }\,\neg \mathrm{L}}{\psi ,\neg \varphi ,\neg \psi \Rightarrow }}{\varphi \vee \psi ,\neg \varphi ,\neg \psi \Rightarrow }\, \vee \mathrm{L} \]\n\n(Recall that double inference lines indicate several weakening, contraction, and exchange inferences; the unlabeled steps above are of this kind.)
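A Lean 4 counterpart of the case distinction performed by \( \vee \mathrm{L} \) (a sketch, our own names):

```lean
-- Given φ ∨ ψ together with ¬φ and ¬ψ, both cases are refuted.
example (φ ψ : Prop) (h : φ ∨ ψ) (h1 : ¬φ) (h2 : ¬ψ) : False :=
  h.elim h1 h2
```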
|
Yes
|
Proposition 17.24. 1. \( \varphi ,\varphi \rightarrow \psi \vdash \psi \) . 2. Both \( \neg \varphi \vdash \varphi \rightarrow \psi \) and \( \psi \vdash \varphi \rightarrow \psi \) .
|
Proof. 1. The sequent \( \varphi \rightarrow \psi ,\varphi \Rightarrow \psi \) is derivable: \[ \frac{\varphi \Rightarrow \varphi \;\psi \Rightarrow \psi }{\varphi \rightarrow \psi ,\varphi \Rightarrow \psi } \rightarrow \mathrm{L} \] 2. Both sequents \( \neg \varphi \Rightarrow \varphi \rightarrow \psi \) and \( \psi \Rightarrow \varphi \rightarrow \psi \) are derivable: \[ \dfrac{\dfrac{\dfrac{\dfrac{\varphi \Rightarrow \varphi }{\neg \varphi ,\varphi \Rightarrow }\,\neg \mathrm{L}}{\varphi ,\neg \varphi \Rightarrow }\,\mathrm{XL}}{\varphi ,\neg \varphi \Rightarrow \psi }\,\mathrm{WR}}{\neg \varphi \Rightarrow \varphi \rightarrow \psi }\, \rightarrow \mathrm{R}\qquad \dfrac{\dfrac{\psi \Rightarrow \psi }{\varphi ,\psi \Rightarrow \psi }\,\mathrm{WL}}{\psi \Rightarrow \varphi \rightarrow \psi }\, \rightarrow \mathrm{R} \]
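Lean 4 sketches of the same facts (our own formalization):

```lean
example (φ ψ : Prop) (h1 : φ) (h2 : φ → ψ) : ψ := h2 h1       -- modus ponens
example (φ ψ : Prop) (h : ¬φ) : φ → ψ := fun a => absurd a h  -- from ¬φ
example (φ ψ : Prop) (h : ψ) : φ → ψ := fun _ => h            -- from ψ
```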
|
Yes
|
Theorem 17.25. If \( c \) is a constant not occurring in \( \Gamma \) or \( \varphi \left( x\right) \) and \( \Gamma \vdash \varphi \left( c\right) \), then \( \Gamma \vdash \forall {x\varphi }\left( x\right) \) .
|
Proof. Let \( {\pi }_{0} \) be an LK-derivation of \( {\Gamma }_{0} \Rightarrow \varphi \left( c\right) \) for some finite \( {\Gamma }_{0} \subseteq \Gamma \) . By adding a \( \forall \mathrm{R} \) inference, we obtain a proof of \( {\Gamma }_{0} \Rightarrow \forall {x\varphi }\left( x\right) \), since \( c \) does not occur in \( \Gamma \) or \( \varphi \left( x\right) \) and thus the eigenvariable condition is satisfied.
|
Yes
|
Proposition 17.26. 1. \( \varphi \left( t\right) \vdash \exists {x\varphi }\left( x\right) \) . 2. \( \forall {x\varphi }\left( x\right) \vdash \varphi \left( t\right) \) .
|
Proof. 1. The sequent \( \varphi \left( t\right) \Rightarrow \exists {x\varphi }\left( x\right) \) is derivable: \[ \frac{\varphi \left( t\right) \Rightarrow \varphi \left( t\right) }{\varphi \left( t\right) \Rightarrow \exists {x\varphi }\left( x\right) }\exists \mathrm{R} \] 2. The sequent \( \forall {x\varphi }\left( x\right) \Rightarrow \varphi \left( t\right) \) is derivable: \[ \frac{\varphi \left( t\right) \Rightarrow \varphi \left( t\right) }{\forall {x\varphi }\left( x\right) \Rightarrow \varphi \left( t\right) }\forall \mathrm{L} \]
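The quantifier analogues in Lean 4 (a sketch; the type \( \alpha \) and the names are ours):

```lean
example {α : Type} (φ : α → Prop) (t : α) (h : φ t) : ∃ x, φ x := ⟨t, h⟩  -- ∃R
example {α : Type} (φ : α → Prop) (t : α) (h : ∀ x, φ x) : φ t := h t     -- ∀L
```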
|
Yes
|
Corollary 17.30. If \( \Gamma \vdash \varphi \) then \( \Gamma \vDash \varphi \) .
|
Proof. If \( \Gamma \vdash \varphi \) then for some finite subset \( {\Gamma }_{0} \subseteq \Gamma \), there is a derivation of \( {\Gamma }_{0} \Rightarrow \varphi \) . By Theorem 17.28, every structure \( \mathfrak{M} \) either makes some \( \psi \in {\Gamma }_{0} \) false or makes \( \varphi \) true. Hence, if \( \mathfrak{M} \vDash \Gamma \) then also \( \mathfrak{M} \vDash \varphi \) .
|
Yes
|
Corollary 17.31. If \( \Gamma \) is satisfiable, then it is consistent.
|
Proof. We prove the contrapositive. Suppose that \( \Gamma \) is not consistent. Then there is a finite \( {\Gamma }_{0} \subseteq \Gamma \) and a derivation of \( {\Gamma }_{0} \Rightarrow \) . By Theorem 17.28, \( {\Gamma }_{0} \Rightarrow \) is valid. In other words, for every structure \( \mathfrak{M} \), there is \( \chi \in {\Gamma }_{0} \) so that \( \mathfrak{M} \nvDash \chi \) , and since \( {\Gamma }_{0} \subseteq \Gamma \), that \( \chi \) is also in \( \Gamma \) . Thus, no \( \mathfrak{M} \) satisfies \( \Gamma \), and \( \Gamma \) is not satisfiable.
|
Yes
|
If \( s \) and \( t \) are closed terms, then \( s = t,\varphi \left( s\right) \vdash \varphi \left( t\right) \) :
|
\[ \dfrac{\dfrac{\varphi \left( s\right) \Rightarrow \varphi \left( s\right) }{s = t,\varphi \left( s\right) \Rightarrow \varphi \left( s\right) }\,\mathrm{WL}}{s = t,\varphi \left( s\right) \Rightarrow \varphi \left( t\right) }\, = \]
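Substitution of identicals, sketched in Lean 4 (our own names; `▸` rewrites along the equation):

```lean
-- From s = t and φ(s), conclude φ(t).
example {α : Type} (φ : α → Prop) (s t : α) (h : s = t) (hs : φ s) : φ t :=
  h ▸ hs
```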
|
Yes
|
Proposition 17.34. LK with initial sequents and rules for identity is sound.
|
Proof. Initial sequents of the form \( \Rightarrow t = t \) are valid, since for every structure \( \mathfrak{M} \), \( \mathfrak{M} \vDash t = t \) . (Note that we assume the term \( t \) to be closed, i.e., it contains no variables, so variable assignments are irrelevant.)\n\nSuppose the last inference in a derivation is \( = \) . Then the premise is \( {t}_{1} = {t}_{2},\Gamma \Rightarrow \Delta ,\varphi \left( {t}_{1}\right) \) and the conclusion is \( {t}_{1} = {t}_{2},\Gamma \Rightarrow \Delta ,\varphi \left( {t}_{2}\right) \) . Consider a structure \( \mathfrak{M} \) . We need to show that the conclusion is valid, i.e., if \( \mathfrak{M} \vDash {t}_{1} = {t}_{2} \) and \( \mathfrak{M} \vDash \Gamma \), then either \( \mathfrak{M} \vDash \chi \) for some \( \chi \in \Delta \) or \( \mathfrak{M} \vDash \varphi \left( {t}_{2}\right) \) .\n\nBy induction hypothesis, the premise is valid. This means that if \( \mathfrak{M} \vDash {t}_{1} = {t}_{2} \) and \( \mathfrak{M} \vDash \Gamma \), either (a) for some \( \chi \in \Delta \), \( \mathfrak{M} \vDash \chi \), or (b) \( \mathfrak{M} \vDash \varphi \left( {t}_{1}\right) \) . In case (a) we are done. Consider case (b). Let \( s \) be a variable assignment with \( s\left( x\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{1}\right) \) . By Proposition 14.41, \( \mathfrak{M}, s \vDash \varphi \left( {t}_{1}\right) \) . Since \( s{ \sim }_{x}s \), by Proposition 14.46, \( \mathfrak{M}, s \vDash \varphi \left( x\right) \) . Since \( \mathfrak{M} \vDash {t}_{1} = {t}_{2} \), we have \( {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{1}\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{2}\right) \), and hence \( s\left( x\right) = {\operatorname{Val}}^{\mathfrak{M}}\left( {t}_{2}\right) \) . By applying Proposition 14.46 again, we also have \( \mathfrak{M}, s \vDash \varphi \left( {t}_{2}\right) \) . By Proposition 14.41, \( \mathfrak{M} \vDash \varphi \left( {t}_{2}\right) \) .
|
Yes
|
Every assumption on its own is a derivation. So, e.g., \( \chi \) by itself is a derivation, and so is \( \theta \) by itself. We can obtain a new derivation from these by applying, say, the \( \land \) Intro rule,
|
\[ \frac{\varphi \;\psi }{\varphi \land \psi } \land \text{Intro} \] These rules are meant to be general: we can replace the \( \varphi \) and \( \psi \) in it with any sentences, e.g., by \( \chi \) and \( \theta \) . Then the conclusion would be \( \chi \land \theta \), and so \[ \frac{\chi \;\theta }{\chi \land \theta } \land \text{Intro} \] is a correct derivation. Of course, we can also switch the assumptions, so that \( \theta \) plays the role of \( \varphi \) and \( \chi \) that of \( \psi \) . Thus, \[ \frac{\theta \;\chi }{\theta \land \chi } \land \text{Intro} \] is also a correct derivation.
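The same two derivations as Lean 4 terms (a sketch, using the text's \( \chi \) and \( \theta \)):

```lean
example (χ θ : Prop) (h1 : χ) (h2 : θ) : χ ∧ θ := And.intro h1 h2
example (χ θ : Prop) (h1 : χ) (h2 : θ) : θ ∧ χ := And.intro h2 h1
```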
|
Yes
|
Let’s give a derivation of the sentence \( \left( {\varphi \land \psi }\right) \rightarrow \varphi \) .
|
\[ \dfrac{\dfrac{{\left\lbrack \varphi \land \psi \right\rbrack }^{1}}{\varphi }\, \land \text{Elim}}{\left( {\varphi \land \psi }\right) \rightarrow \varphi }\;1\, \rightarrow \text{Intro} \]
|
Yes
|
For instance, suppose we want to derive \( \varphi \vee \neg \varphi \) . Our usual strategy would be to attempt to derive \( \varphi \vee \neg \varphi \) using \( \vee \) Intro. But this would require us to derive either \( \varphi \) or \( \neg \varphi \) from no assumptions, and this can’t be done. \( { \bot }_{C} \) to the rescue!
|
Now we’re looking for a derivation of \( \bot \) from \( \neg \left( {\varphi \vee \neg \varphi }\right) \) . Since \( \bot \) is the conclusion of \( \neg \) Elim we might try that. Our strategy for finding a derivation of \( \neg \varphi \) calls for an application of \( \neg \) Intro. Here, we can get \( \bot \) easily by applying \( \neg \) Elim to the assumption \( \neg \left( {\varphi \vee \neg \varphi }\right) \) and \( \varphi \vee \neg \varphi \), which follows from our new assumption \( \varphi \) by \( \vee \) Intro. On the right side we use the same strategy, except we get \( \varphi \) by \( { \bot }_{C} \) : \[ \dfrac{\dfrac{\dfrac{{\left\lbrack \neg \left( {\varphi \vee \neg \varphi }\right) \right\rbrack }^{1}\quad \dfrac{{\left\lbrack \varphi \right\rbrack }^{2}}{\varphi \vee \neg \varphi }\, \vee \text{Intro}}{\bot }\,\neg \text{Elim}}{\neg \varphi }\;2\,\neg \text{Intro}\qquad \dfrac{\dfrac{{\left\lbrack \neg \left( {\varphi \vee \neg \varphi }\right) \right\rbrack }^{1}\quad \dfrac{{\left\lbrack \neg \varphi \right\rbrack }^{3}}{\varphi \vee \neg \varphi }\, \vee \text{Intro}}{\bot }\,\neg \text{Elim}}{\varphi }\;3\,{ \bot }_{C}}{\dfrac{\bot }{\varphi \vee \neg \varphi }\;1\,{ \bot }_{C}}\;\neg \text{Elim} \]
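The constructive core of this derivation is the double negation of excluded middle; the classical rule \( { \bot }_{C} \) then strips the double negation. A Lean 4 sketch (our own formalization):

```lean
-- Mirrors the derivation: from ¬(φ ∨ ¬φ) we refute φ, obtaining ¬φ,
-- which again yields φ ∨ ¬φ, a contradiction.
example (φ : Prop) : ¬¬(φ ∨ ¬φ) :=
  fun h => h (Or.inr (fun a => h (Or.inl a)))

-- With classical reasoning (the counterpart of ⊥_C):
open Classical in
example (φ : Prop) : φ ∨ ¬φ :=
  byContradiction fun h => h (Or.inr (fun a => h (Or.inl a)))
```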
|
Yes
|
Let’s see how we’d give a derivation of the formula \( \exists x\neg \varphi \left( x\right) \rightarrow \neg \forall {x\varphi }\left( x\right) \) .
|
\[ \exists x\neg \varphi \left( x\right) \rightarrow \neg \forall {x\varphi }\left( x\right) \] We start by writing down what it would take to justify that last step using the \( \rightarrow \) Intro rule. \[ \dfrac{\begin{array}{c}{\left\lbrack \exists x\neg \varphi \left( x\right) \right\rbrack }^{1}\\ \vdots \\ \neg \forall {x\varphi }\left( x\right) \end{array}}{\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall {x\varphi }\left( x\right) }\;1\, \rightarrow \text{Intro} \] In order to derive \( \neg \forall {x\varphi }\left( x\right) \), we will attempt to use the \( \neg \) Intro rule: this requires that we derive a contradiction, possibly using \( \forall {x\varphi }\left( x\right) \) as an additional assumption. Of course, this contradiction may involve the assumption \( \neg \varphi \left( a\right) \), which will be discharged by the \( \exists \) Elim inference. We can set it up as follows: \[ \dfrac{{\left\lbrack \exists x\neg \varphi \left( x\right) \right\rbrack }^{1}\quad \dfrac{\dfrac{{\left\lbrack \neg \varphi \left( a\right) \right\rbrack }^{2}\quad \dfrac{{\left\lbrack \forall x\varphi \left( x\right) \right\rbrack }^{3}}{\varphi \left( a\right) }\,\forall \text{Elim}}{\bot }\,\neg \text{Elim}}{\neg \forall {x\varphi }\left( x\right) }\;3\,\neg \text{Intro}}{\dfrac{\neg \forall {x\varphi }\left( x\right) }{\exists x\neg \varphi \left( x\right) \rightarrow \neg \forall {x\varphi }\left( x\right) }\;1\, \rightarrow \text{Intro}}\;2\,\exists \text{Elim} \] It is important, especially when dealing with quantifiers, to double check at this point that the eigenvariable condition has not been violated. Since the only rule we applied that is subject to the eigenvariable condition was \( \exists \) Elim, and the eigenvariable \( a \) does not occur in any assumptions it depends on, this is a correct derivation.
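Read as a program, the derivation is a short Lean 4 term (a sketch with our own names):

```lean
-- The witness a of ∃x ¬φ(x) refutes the instance φ(a) of ∀x φ(x).
example {α : Type} (φ : α → Prop) : (∃ x, ¬ φ x) → ¬ ∀ x, φ x :=
  fun h hall => Exists.elim h fun a ha => ha (hall a)
```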
|
Yes
|
Let’s see how we’d give a derivation of the formula \( \exists {x\chi }\left( {x, b}\right) \) from the assumptions \( \exists x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) \) and \( \forall x\left( {\psi \left( x\right) \rightarrow \chi \left( {x, b}\right) }\right) \).
|
\[ \exists {x\chi }\left( {x, b}\right) \] \n\nWe have two premises to work with. To use the first, i.e., try to find a derivation of \( \exists {x\chi }\left( {x, b}\right) \) from \( \exists x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) \) we would use the \( \exists \) Elim rule. Since it has an eigenvariable condition, we will apply that rule first. We get the following: \n\n\[ \dfrac{\exists x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) \quad \begin{array}{c}{\left\lbrack \varphi \left( a\right) \land \psi \left( a\right) \right\rbrack }^{1}\\ \vdots \\ \exists {x\chi }\left( {x, b}\right) \end{array}}{\exists {x\chi }\left( {x, b}\right) }\;1\,\exists \text{Elim} \] \n\nThe two assumptions we are working with share \( \psi \) . It may be useful at this point to apply \( \land \) Elim to separate out \( \psi \left( a\right) \) . \n\n\[ \dfrac{\exists x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) \quad \begin{array}{c}\dfrac{{\left\lbrack \varphi \left( a\right) \land \psi \left( a\right) \right\rbrack }^{1}}{\psi \left( a\right) }\, \land \text{Elim}\\ \vdots \\ \exists {x\chi }\left( {x, b}\right) \end{array}}{\exists {x\chi }\left( {x, b}\right) }\;1\,\exists \text{Elim} \] \n\nThe second assumption we have to work with is \( \forall x\left( {\psi \left( x\right) \rightarrow \chi \left( {x, b}\right) }\right) \) . Since there is no eigenvariable condition we can instantiate \( x \) with the constant symbol \( a \) using \( \forall \) Elim to get \( \psi \left( a\right) \rightarrow \chi \left( {a, b}\right) \) . We now have both \( \psi \left( a\right) \rightarrow \chi \left( {a, b}\right) \) and \( \psi \left( a\right) \) . Our next move should be a straightforward application of the \( \rightarrow \) Elim rule. \n\n\[ \dfrac{\exists x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) \quad \begin{array}{c}\dfrac{\dfrac{\forall x\left( {\psi \left( x\right) \rightarrow \chi \left( {x, b}\right) }\right) }{\psi \left( a\right) \rightarrow \chi \left( {a, b}\right) }\,\forall \text{Elim}\quad \dfrac{{\left\lbrack \varphi \left( a\right) \land \psi \left( a\right) \right\rbrack }^{1}}{\psi \left( a\right) }\, \land \text{Elim}\\ \hline \chi \left( {a, b}\right) \; \rightarrow \text{Elim}\\ \vdots \\ \exists {x\chi }\left( {x, b}\right) \end{array}}{\exists {x\chi }\left( {x, b}\right) }\;1\,\exists \text{Elim} \] \n\nWe are so close! One application of \( \exists \) Intro and we have reached our goal. \n\n\[ \dfrac{\exists x\left( {\varphi \left( x\right) \land \psi \left( x\right) }\right) \quad \dfrac{\dfrac{\dfrac{\forall x\left( {\psi \left( x\right) \rightarrow \chi \left( {x, b}\right) }\right) }{\psi \left( a\right) \rightarrow \chi \left( {a, b}\right) }\,\forall \text{Elim}\quad \dfrac{{\left\lbrack \varphi \left( a\right) \land \psi \left( a\right) \right\rbrack }^{1}}{\psi \left( a\right) }\, \land \text{Elim}}{\chi \left( {a, b}\right) }\, \rightarrow \text{Elim}}{\exists {x\chi }\left( {x, b}\right) }\,\exists \text{Intro}}{\exists {x\chi }\left( {x, b}\right) }\;1\,\exists \text{Elim} \]
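The whole example, read through Curry–Howard, becomes a short Lean 4 term (a sketch; \( \alpha \) and the hypothesis names are ours):

```lean
-- ∃Elim on h1 yields a with φ(a) ∧ ψ(a); ∀Elim and →Elim give χ(a,b);
-- ∃Intro closes the goal.
example {α : Type} (b : α) (φ ψ : α → Prop) (χ : α → α → Prop)
    (h1 : ∃ x, φ x ∧ ψ x) (h2 : ∀ x, ψ x → χ x b) : ∃ x, χ x b :=
  Exists.elim h1 fun a ha => ⟨a, h2 a ha.2⟩
```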
|
Yes
|
Give a derivation of the formula \( \neg \forall {x\varphi }\left( x\right) \) from the assumptions \( \forall {x\varphi }\left( x\right) \rightarrow \exists {y\psi }\left( y\right) \) and \( \neg \exists {y\psi }\left( y\right) \).
|
\[ \dfrac{\neg \exists {y\psi }\left( y\right) \quad \dfrac{\forall {x\varphi }\left( x\right) \rightarrow \exists {y\psi }\left( y\right) \quad {\left\lbrack \forall x\varphi \left( x\right) \right\rbrack }^{1}}{\exists {y\psi }\left( y\right) }\, \rightarrow \text{Elim}}{\dfrac{\bot }{\neg \forall {x\varphi }\left( x\right) }\;1\,\neg \text{Intro}}\;\neg \text{Elim} \]
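A Lean 4 sketch of the same derivation (our own names):

```lean
-- Assume ∀x φ(x); →Elim yields ∃y ψ(y), which ¬Elim refutes against h2.
example {α : Type} (φ ψ : α → Prop)
    (h1 : (∀ x, φ x) → ∃ y, ψ y) (h2 : ¬ ∃ y, ψ y) : ¬ ∀ x, φ x :=
  fun h => h2 (h1 h)
```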
|
Yes
|
Proposition 18.13 (Reflexivity). If \( \varphi \in \Gamma \), then \( \Gamma \vdash \varphi \) .
|
Proof. The assumption \( \varphi \) by itself is a derivation of \( \varphi \) where every undischarged assumption (i.e., \( \varphi \) ) is in \( \Gamma \) .
|
Yes
|